00:00:00.001 Started by upstream project "autotest-per-patch" build number 127163 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 24313 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.101 The recommended git tool is: git 00:00:00.102 using credential 00000000-0000-0000-0000-000000000002 00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.148 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.199 Using shallow fetch with depth 1 00:00:00.199 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.199 > git --version # timeout=10 00:00:00.226 > git --version # 'git version 2.39.2' 00:00:00.226 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/09/24309/5 # timeout=5 00:00:04.932 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.946 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.959 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD) 00:00:04.959 > git config core.sparsecheckout # timeout=10 00:00:04.970 > git read-tree -mu HEAD # timeout=10 00:00:04.986 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5 00:00:05.005 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs" 00:00:05.005 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10 00:00:05.103 [Pipeline] Start of Pipeline 00:00:05.121 [Pipeline] library 00:00:05.123 Loading library shm_lib@master 00:00:05.123 Library shm_lib@master is cached. Copying from home. 00:00:05.139 [Pipeline] node 00:00:05.148 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.150 [Pipeline] { 00:00:05.160 [Pipeline] catchError 00:00:05.162 [Pipeline] { 00:00:05.173 [Pipeline] wrap 00:00:05.181 [Pipeline] { 00:00:05.187 [Pipeline] stage 00:00:05.188 [Pipeline] { (Prologue) 00:00:05.204 [Pipeline] echo 00:00:05.205 Node: VM-host-SM17 00:00:05.209 [Pipeline] cleanWs 00:00:05.219 [WS-CLEANUP] Deleting project workspace... 00:00:05.219 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.226 [WS-CLEANUP] done 00:00:05.422 [Pipeline] setCustomBuildProperty 00:00:05.528 [Pipeline] httpRequest 00:00:05.550 [Pipeline] echo 00:00:05.551 Sorcerer 10.211.164.101 is alive 00:00:05.558 [Pipeline] httpRequest 00:00:05.561 HttpMethod: GET 00:00:05.562 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:05.562 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:05.577 Response Code: HTTP/1.1 200 OK 00:00:05.577 Success: Status code 200 is in the accepted range: 200,404 00:00:05.578 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:11.890 [Pipeline] sh 00:00:12.170 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:12.184 [Pipeline] httpRequest 00:00:12.217 [Pipeline] echo 00:00:12.219 Sorcerer 10.211.164.101 is alive 00:00:12.227 [Pipeline] httpRequest 00:00:12.232 HttpMethod: GET 00:00:12.232 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:12.233 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:12.234 Response Code: HTTP/1.1 200 OK 00:00:12.235 Success: Status code 200 is in the accepted range: 200,404 00:00:12.235 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:31.253 [Pipeline] sh 00:00:31.532 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:34.823 [Pipeline] sh 00:00:35.102 + git -C spdk log --oneline -n5 00:00:35.102 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:00:35.102 fc2398dfa raid: clear base bdev configure_cb after executing 00:00:35.102 5558f3f50 raid: complete bdev_raid_create after sb is written 00:00:35.102 d005e023b raid: fix empty slot not updated in sb after resize 00:00:35.102 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:00:35.120 [Pipeline] writeFile 00:00:35.135 [Pipeline] sh 00:00:35.440 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:35.451 [Pipeline] sh 00:00:35.728 + cat autorun-spdk.conf 00:00:35.728 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.728 SPDK_TEST_NVMF=1 00:00:35.728 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.728 SPDK_TEST_URING=1 00:00:35.728 SPDK_TEST_USDT=1 00:00:35.728 SPDK_RUN_UBSAN=1 00:00:35.728 NET_TYPE=virt 00:00:35.728 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:35.735 RUN_NIGHTLY=0 00:00:35.736 [Pipeline] } 00:00:35.753 [Pipeline] // stage 00:00:35.767 [Pipeline] stage 00:00:35.769 [Pipeline] { (Run VM) 00:00:35.782 [Pipeline] sh 00:00:36.059 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:36.059 + echo 'Start stage prepare_nvme.sh' 00:00:36.059 Start stage prepare_nvme.sh 00:00:36.059 + [[ -n 7 ]] 00:00:36.059 + disk_prefix=ex7 00:00:36.059 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:36.059 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:36.059 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:36.059 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.059 ++ SPDK_TEST_NVMF=1 00:00:36.059 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.059 ++ SPDK_TEST_URING=1 00:00:36.059 ++ SPDK_TEST_USDT=1 00:00:36.059 ++ SPDK_RUN_UBSAN=1 00:00:36.059 ++ NET_TYPE=virt 00:00:36.059 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:36.059 ++ RUN_NIGHTLY=0 00:00:36.059 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:36.059 + nvme_files=() 00:00:36.059 + declare -A nvme_files 00:00:36.059 + backend_dir=/var/lib/libvirt/images/backends 00:00:36.059 + nvme_files['nvme.img']=5G 00:00:36.059 + nvme_files['nvme-cmb.img']=5G 00:00:36.059 + nvme_files['nvme-multi0.img']=4G 00:00:36.059 + nvme_files['nvme-multi1.img']=4G 00:00:36.059 + nvme_files['nvme-multi2.img']=4G 00:00:36.059 + nvme_files['nvme-openstack.img']=8G 00:00:36.059 + nvme_files['nvme-zns.img']=5G 00:00:36.059 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:36.059 + (( SPDK_TEST_FTL == 1 )) 00:00:36.059 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:36.059 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:36.059 + for nvme in "${!nvme_files[@]}" 00:00:36.059 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:36.059 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.059 + for nvme in "${!nvme_files[@]}" 00:00:36.059 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:36.059 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.059 + for nvme in "${!nvme_files[@]}" 00:00:36.059 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:36.059 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:36.059 + for nvme in "${!nvme_files[@]}" 00:00:36.059 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:36.059 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.059 + for nvme in "${!nvme_files[@]}" 00:00:36.059 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:36.059 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.059 + for nvme in "${!nvme_files[@]}" 00:00:36.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:36.060 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.060 + for nvme in "${!nvme_files[@]}" 00:00:36.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:36.060 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.060 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:36.060 + echo 'End stage prepare_nvme.sh' 00:00:36.060 End stage prepare_nvme.sh 00:00:36.071 [Pipeline] sh 00:00:36.346 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:36.347 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:00:36.347 00:00:36.347 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:36.347 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:36.347 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:36.347 HELP=0 00:00:36.347 DRY_RUN=0 00:00:36.347 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:00:36.347 NVME_DISKS_TYPE=nvme,nvme, 00:00:36.347 NVME_AUTO_CREATE=0 00:00:36.347 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:00:36.347 NVME_CMB=,, 00:00:36.347 NVME_PMR=,, 00:00:36.347 NVME_ZNS=,, 00:00:36.347 NVME_MS=,, 00:00:36.347 NVME_FDP=,, 
00:00:36.347 SPDK_VAGRANT_DISTRO=fedora38 00:00:36.347 SPDK_VAGRANT_VMCPU=10 00:00:36.347 SPDK_VAGRANT_VMRAM=12288 00:00:36.347 SPDK_VAGRANT_PROVIDER=libvirt 00:00:36.347 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:36.347 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:36.347 SPDK_OPENSTACK_NETWORK=0 00:00:36.347 VAGRANT_PACKAGE_BOX=0 00:00:36.347 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:36.347 FORCE_DISTRO=true 00:00:36.347 VAGRANT_BOX_VERSION= 00:00:36.347 EXTRA_VAGRANTFILES= 00:00:36.347 NIC_MODEL=e1000 00:00:36.347 00:00:36.347 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:36.347 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:38.914 Bringing machine 'default' up with 'libvirt' provider... 00:00:39.850 ==> default: Creating image (snapshot of base box volume). 00:00:39.850 ==> default: Creating domain with the following settings... 00:00:39.850 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721903949_f9110f8c0397c62ffd18 00:00:39.850 ==> default: -- Domain type: kvm 00:00:39.850 ==> default: -- Cpus: 10 00:00:39.850 ==> default: -- Feature: acpi 00:00:39.850 ==> default: -- Feature: apic 00:00:39.850 ==> default: -- Feature: pae 00:00:39.850 ==> default: -- Memory: 12288M 00:00:39.850 ==> default: -- Memory Backing: hugepages: 00:00:39.850 ==> default: -- Management MAC: 00:00:39.850 ==> default: -- Loader: 00:00:39.850 ==> default: -- Nvram: 00:00:39.850 ==> default: -- Base box: spdk/fedora38 00:00:39.850 ==> default: -- Storage pool: default 00:00:39.850 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721903949_f9110f8c0397c62ffd18.img (20G) 00:00:39.850 ==> default: -- Volume Cache: default 00:00:39.850 ==> default: -- Kernel: 00:00:39.850 ==> default: -- Initrd: 00:00:39.850 ==> default: -- Graphics Type: vnc 00:00:39.850 ==> default: -- Graphics Port: -1 00:00:39.850 ==> default: -- Graphics IP: 127.0.0.1 00:00:39.850 ==> default: -- Graphics Password: Not defined 00:00:39.850 ==> default: -- Video Type: cirrus 00:00:39.850 ==> default: -- Video VRAM: 9216 00:00:39.850 ==> default: -- Sound Type: 00:00:39.850 ==> default: -- Keymap: en-us 00:00:39.850 ==> default: -- TPM Path: 00:00:39.850 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:39.850 ==> default: -- Command line args: 00:00:39.850 ==> default: -> value=-device, 00:00:39.850 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:39.850 ==> default: -> value=-drive, 00:00:39.850 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:00:39.850 ==> default: -> value=-device, 00:00:39.850 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:39.850 ==> default: -> value=-device, 00:00:39.850 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:39.850 ==> default: -> value=-drive, 00:00:39.850 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:39.850 ==> default: -> value=-device, 00:00:39.850 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:39.850 ==> default: -> value=-drive, 
00:00:39.850 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:39.850 ==> default: -> value=-device, 00:00:39.850 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:39.850 ==> default: -> value=-drive, 00:00:39.850 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:39.850 ==> default: -> value=-device, 00:00:39.850 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.109 ==> default: Creating shared folders metadata... 00:00:40.109 ==> default: Starting domain. 00:00:42.012 ==> default: Waiting for domain to get an IP address... 00:01:00.141 ==> default: Waiting for SSH to become available... 00:01:01.098 ==> default: Configuring and enabling network interfaces... 00:01:05.279 default: SSH address: 192.168.121.162:22 00:01:05.279 default: SSH username: vagrant 00:01:05.279 default: SSH auth method: private key 00:01:07.810 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:15.923 ==> default: Mounting SSHFS shared folder... 00:01:17.297 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:17.297 ==> default: Checking Mount.. 00:01:18.232 ==> default: Folder Successfully Mounted! 00:01:18.232 ==> default: Running provisioner: file... 00:01:19.167 default: ~/.gitconfig => .gitconfig 00:01:19.734 00:01:19.734 SUCCESS! 00:01:19.734 00:01:19.734 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:19.734 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:19.734 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:19.734 00:01:19.745 [Pipeline] } 00:01:19.764 [Pipeline] // stage 00:01:19.774 [Pipeline] dir 00:01:19.775 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:19.777 [Pipeline] { 00:01:19.792 [Pipeline] catchError 00:01:19.794 [Pipeline] { 00:01:19.808 [Pipeline] sh 00:01:20.087 + vagrant ssh-config --host vagrant 00:01:20.087 + sed -ne /^Host/,$p 00:01:20.087 + tee ssh_conf 00:01:23.371 Host vagrant 00:01:23.371 HostName 192.168.121.162 00:01:23.371 User vagrant 00:01:23.371 Port 22 00:01:23.371 UserKnownHostsFile /dev/null 00:01:23.371 StrictHostKeyChecking no 00:01:23.371 PasswordAuthentication no 00:01:23.371 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:23.371 IdentitiesOnly yes 00:01:23.371 LogLevel FATAL 00:01:23.371 ForwardAgent yes 00:01:23.371 ForwardX11 yes 00:01:23.371 00:01:23.385 [Pipeline] withEnv 00:01:23.387 [Pipeline] { 00:01:23.404 [Pipeline] sh 00:01:23.684 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:23.684 source /etc/os-release 00:01:23.684 [[ -e /image.version ]] && img=$(< /image.version) 00:01:23.684 # Minimal, systemd-like check. 
00:01:23.684 if [[ -e /.dockerenv ]]; then 00:01:23.684 # Clear garbage from the node's name: 00:01:23.684 # agt-er_autotest_547-896 -> autotest_547-896 00:01:23.684 # $HOSTNAME is the actual container id 00:01:23.684 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:23.684 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:23.684 # We can assume this is a mount from a host where container is running, 00:01:23.684 # so fetch its hostname to easily identify the target swarm worker. 00:01:23.684 container="$(< /etc/hostname) ($agent)" 00:01:23.684 else 00:01:23.684 # Fallback 00:01:23.684 container=$agent 00:01:23.684 fi 00:01:23.684 fi 00:01:23.684 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:23.684 00:01:23.951 [Pipeline] } 00:01:23.971 [Pipeline] // withEnv 00:01:23.980 [Pipeline] setCustomBuildProperty 00:01:23.995 [Pipeline] stage 00:01:23.997 [Pipeline] { (Tests) 00:01:24.016 [Pipeline] sh 00:01:24.295 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:24.567 [Pipeline] sh 00:01:24.842 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:24.874 [Pipeline] timeout 00:01:24.875 Timeout set to expire in 30 min 00:01:24.877 [Pipeline] { 00:01:24.898 [Pipeline] sh 00:01:25.175 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:25.742 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 00:01:25.756 [Pipeline] sh 00:01:26.035 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:26.307 [Pipeline] sh 00:01:26.586 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:26.858 [Pipeline] sh 00:01:27.135 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:27.135 ++ readlink -f spdk_repo 00:01:27.393 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:27.393 + [[ -n /home/vagrant/spdk_repo ]] 00:01:27.394 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:27.394 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:27.394 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:27.394 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:27.394 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:27.394 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:27.394 + cd /home/vagrant/spdk_repo 00:01:27.394 + source /etc/os-release 00:01:27.394 ++ NAME='Fedora Linux' 00:01:27.394 ++ VERSION='38 (Cloud Edition)' 00:01:27.394 ++ ID=fedora 00:01:27.394 ++ VERSION_ID=38 00:01:27.394 ++ VERSION_CODENAME= 00:01:27.394 ++ PLATFORM_ID=platform:f38 00:01:27.394 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:27.394 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:27.394 ++ LOGO=fedora-logo-icon 00:01:27.394 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:27.394 ++ HOME_URL=https://fedoraproject.org/ 00:01:27.394 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:27.394 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:27.394 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:27.394 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:27.394 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:27.394 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:27.394 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:27.394 ++ SUPPORT_END=2024-05-14 00:01:27.394 ++ VARIANT='Cloud Edition' 00:01:27.394 ++ VARIANT_ID=cloud 00:01:27.394 + uname -a 00:01:27.394 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:27.394 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:27.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:27.963 Hugepages 00:01:27.963 node hugesize free / total 00:01:27.963 node0 1048576kB 0 / 0 00:01:27.963 node0 2048kB 0 / 0 00:01:27.963 00:01:27.963 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:27.963 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:27.963 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:27.963 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:27.963 + rm -f /tmp/spdk-ld-path 00:01:27.963 + source autorun-spdk.conf 00:01:27.963 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.963 ++ SPDK_TEST_NVMF=1 00:01:27.963 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.963 ++ SPDK_TEST_URING=1 00:01:27.963 ++ SPDK_TEST_USDT=1 00:01:27.963 ++ SPDK_RUN_UBSAN=1 00:01:27.963 ++ NET_TYPE=virt 00:01:27.963 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.963 ++ RUN_NIGHTLY=0 00:01:27.963 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:27.963 + [[ -n '' ]] 00:01:27.963 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:27.963 + for M in /var/spdk/build-*-manifest.txt 00:01:27.963 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:27.963 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:27.963 + for M in /var/spdk/build-*-manifest.txt 00:01:27.963 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:27.963 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:27.963 ++ uname 00:01:27.963 + [[ Linux == \L\i\n\u\x ]] 00:01:27.963 + sudo dmesg -T 00:01:27.963 + sudo dmesg --clear 00:01:27.963 + dmesg_pid=5113 00:01:27.963 + sudo dmesg -Tw 00:01:27.963 + [[ Fedora Linux == FreeBSD ]] 00:01:27.963 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.963 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.963 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:27.963 + [[ -x /usr/src/fio-static/fio ]] 00:01:27.963 + export FIO_BIN=/usr/src/fio-static/fio 
00:01:27.963 + FIO_BIN=/usr/src/fio-static/fio 00:01:27.963 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:27.963 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:27.963 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:27.963 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.963 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.963 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:27.963 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.963 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.963 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:27.963 Test configuration: 00:01:27.963 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.963 SPDK_TEST_NVMF=1 00:01:27.963 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.963 SPDK_TEST_URING=1 00:01:27.963 SPDK_TEST_USDT=1 00:01:27.963 SPDK_RUN_UBSAN=1 00:01:27.963 NET_TYPE=virt 00:01:27.963 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.222 RUN_NIGHTLY=0 10:39:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:28.222 10:39:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.222 10:39:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.222 10:39:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.222 10:39:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.222 10:39:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.222 10:39:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.222 10:39:57 -- paths/export.sh@5 -- $ export PATH 00:01:28.222 10:39:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.222 10:39:57 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:28.222 10:39:57 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:28.222 10:39:57 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721903997.XXXXXX 00:01:28.222 10:39:57 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721903997.HQqfEd 00:01:28.222 10:39:57 -- 
common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:28.222 10:39:57 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:28.222 10:39:57 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:28.222 10:39:57 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:28.222 10:39:57 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.222 10:39:57 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:28.222 10:39:57 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:28.222 10:39:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.222 10:39:57 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:28.222 10:39:57 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:28.222 10:39:57 -- pm/common@17 -- $ local monitor 00:01:28.222 10:39:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.222 10:39:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.222 10:39:57 -- pm/common@25 -- $ sleep 1 00:01:28.222 10:39:57 -- pm/common@21 -- $ date +%s 00:01:28.222 10:39:57 -- pm/common@21 -- $ date +%s 00:01:28.222 10:39:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721903997 00:01:28.222 10:39:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721903997 00:01:28.222 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721903997_collect-cpu-load.pm.log 00:01:28.222 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721903997_collect-vmstat.pm.log 00:01:29.159 10:39:58 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:29.159 10:39:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.159 10:39:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.159 10:39:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:29.159 10:39:58 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.159 Thu Jul 25 10:39:58 AM UTC 2024 00:01:29.159 10:39:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.159 v24.09-pre-321-g704257090 00:01:29.159 10:39:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:29.159 10:39:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.159 10:39:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.159 10:39:58 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:29.159 10:39:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:29.159 10:39:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.159 ************************************ 00:01:29.159 START TEST ubsan 00:01:29.159 ************************************ 00:01:29.159 using ubsan 00:01:29.159 10:39:58 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:29.159 00:01:29.159 real 0m0.000s 00:01:29.159 user 0m0.000s 00:01:29.159 sys 0m0.000s 00:01:29.159 
10:39:58 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:29.159 10:39:58 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.159 ************************************ 00:01:29.159 END TEST ubsan 00:01:29.159 ************************************ 00:01:29.159 10:39:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:29.159 10:39:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:29.159 10:39:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:29.159 10:39:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:29.159 10:39:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:29.159 10:39:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:29.159 10:39:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:29.159 10:39:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:29.159 10:39:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:29.417 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:29.417 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:29.675 Using 'verbs' RDMA provider 00:01:45.509 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:57.732 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:57.732 Creating mk/config.mk...done. 00:01:57.732 Creating mk/cc.flags.mk...done. 00:01:57.732 Type 'make' to build. 00:01:57.732 10:40:26 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:57.732 10:40:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:57.732 10:40:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:57.732 10:40:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.732 ************************************ 00:01:57.732 START TEST make 00:01:57.732 ************************************ 00:01:57.732 10:40:26 make -- common/autotest_common.sh@1125 -- $ make -j10 00:01:57.732 make[1]: Nothing to be done for 'all'. 
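For reference, the configure-and-build sequence driven above can be reproduced by hand roughly as follows. This is a minimal sketch based only on the flags and make invocation shown in this log, assuming a local SPDK checkout at the same path used in this run and the bundled DPDK submodule:

# Minimal sketch (assumption: local checkout; paths mirror this run).
cd /home/vagrant/spdk_repo/spdk
# Same flag set the autobuild step passed to configure above.
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
# configure selects the default SPDK env, the bundled DPDK (dpdk/build) and the
# 'verbs' RDMA provider, as reported above; the parallel build below is what
# produces the Meson/ninja output that follows.
make -j10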
00:02:09.932 The Meson build system 00:02:09.932 Version: 1.3.1 00:02:09.932 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:09.932 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:09.932 Build type: native build 00:02:09.932 Program cat found: YES (/usr/bin/cat) 00:02:09.932 Project name: DPDK 00:02:09.932 Project version: 24.03.0 00:02:09.932 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:09.932 C linker for the host machine: cc ld.bfd 2.39-16 00:02:09.932 Host machine cpu family: x86_64 00:02:09.932 Host machine cpu: x86_64 00:02:09.932 Message: ## Building in Developer Mode ## 00:02:09.932 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:09.932 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:09.932 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:09.932 Program python3 found: YES (/usr/bin/python3) 00:02:09.932 Program cat found: YES (/usr/bin/cat) 00:02:09.932 Compiler for C supports arguments -march=native: YES 00:02:09.932 Checking for size of "void *" : 8 00:02:09.932 Checking for size of "void *" : 8 (cached) 00:02:09.932 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:09.932 Library m found: YES 00:02:09.932 Library numa found: YES 00:02:09.932 Has header "numaif.h" : YES 00:02:09.932 Library fdt found: NO 00:02:09.932 Library execinfo found: NO 00:02:09.932 Has header "execinfo.h" : YES 00:02:09.932 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:09.932 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:09.932 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:09.932 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:09.932 Run-time dependency openssl found: YES 3.0.9 00:02:09.932 Run-time dependency libpcap found: YES 1.10.4 00:02:09.932 Has header "pcap.h" with dependency libpcap: YES 00:02:09.932 Compiler for C supports arguments -Wcast-qual: YES 00:02:09.932 Compiler for C supports arguments -Wdeprecated: YES 00:02:09.932 Compiler for C supports arguments -Wformat: YES 00:02:09.932 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:09.932 Compiler for C supports arguments -Wformat-security: NO 00:02:09.932 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:09.932 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:09.932 Compiler for C supports arguments -Wnested-externs: YES 00:02:09.932 Compiler for C supports arguments -Wold-style-definition: YES 00:02:09.932 Compiler for C supports arguments -Wpointer-arith: YES 00:02:09.932 Compiler for C supports arguments -Wsign-compare: YES 00:02:09.932 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:09.932 Compiler for C supports arguments -Wundef: YES 00:02:09.932 Compiler for C supports arguments -Wwrite-strings: YES 00:02:09.932 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:09.932 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:09.932 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:09.932 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:09.932 Program objdump found: YES (/usr/bin/objdump) 00:02:09.932 Compiler for C supports arguments -mavx512f: YES 00:02:09.932 Checking if "AVX512 checking" compiles: YES 00:02:09.932 Fetching value of define "__SSE4_2__" : 1 00:02:09.932 Fetching value of define 
"__AES__" : 1 00:02:09.932 Fetching value of define "__AVX__" : 1 00:02:09.932 Fetching value of define "__AVX2__" : 1 00:02:09.932 Fetching value of define "__AVX512BW__" : (undefined) 00:02:09.932 Fetching value of define "__AVX512CD__" : (undefined) 00:02:09.932 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:09.932 Fetching value of define "__AVX512F__" : (undefined) 00:02:09.933 Fetching value of define "__AVX512VL__" : (undefined) 00:02:09.933 Fetching value of define "__PCLMUL__" : 1 00:02:09.933 Fetching value of define "__RDRND__" : 1 00:02:09.933 Fetching value of define "__RDSEED__" : 1 00:02:09.933 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:09.933 Fetching value of define "__znver1__" : (undefined) 00:02:09.933 Fetching value of define "__znver2__" : (undefined) 00:02:09.933 Fetching value of define "__znver3__" : (undefined) 00:02:09.933 Fetching value of define "__znver4__" : (undefined) 00:02:09.933 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:09.933 Message: lib/log: Defining dependency "log" 00:02:09.933 Message: lib/kvargs: Defining dependency "kvargs" 00:02:09.933 Message: lib/telemetry: Defining dependency "telemetry" 00:02:09.933 Checking for function "getentropy" : NO 00:02:09.933 Message: lib/eal: Defining dependency "eal" 00:02:09.933 Message: lib/ring: Defining dependency "ring" 00:02:09.933 Message: lib/rcu: Defining dependency "rcu" 00:02:09.933 Message: lib/mempool: Defining dependency "mempool" 00:02:09.933 Message: lib/mbuf: Defining dependency "mbuf" 00:02:09.933 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:09.933 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.933 Compiler for C supports arguments -mpclmul: YES 00:02:09.933 Compiler for C supports arguments -maes: YES 00:02:09.933 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.933 Compiler for C supports arguments -mavx512bw: YES 00:02:09.933 Compiler for C supports arguments -mavx512dq: YES 00:02:09.933 Compiler for C supports arguments -mavx512vl: YES 00:02:09.933 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:09.933 Compiler for C supports arguments -mavx2: YES 00:02:09.933 Compiler for C supports arguments -mavx: YES 00:02:09.933 Message: lib/net: Defining dependency "net" 00:02:09.933 Message: lib/meter: Defining dependency "meter" 00:02:09.933 Message: lib/ethdev: Defining dependency "ethdev" 00:02:09.933 Message: lib/pci: Defining dependency "pci" 00:02:09.933 Message: lib/cmdline: Defining dependency "cmdline" 00:02:09.933 Message: lib/hash: Defining dependency "hash" 00:02:09.933 Message: lib/timer: Defining dependency "timer" 00:02:09.933 Message: lib/compressdev: Defining dependency "compressdev" 00:02:09.933 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:09.933 Message: lib/dmadev: Defining dependency "dmadev" 00:02:09.933 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:09.933 Message: lib/power: Defining dependency "power" 00:02:09.933 Message: lib/reorder: Defining dependency "reorder" 00:02:09.933 Message: lib/security: Defining dependency "security" 00:02:09.933 Has header "linux/userfaultfd.h" : YES 00:02:09.933 Has header "linux/vduse.h" : YES 00:02:09.933 Message: lib/vhost: Defining dependency "vhost" 00:02:09.933 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:09.933 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.933 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.933 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.933 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:09.933 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:09.933 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:09.933 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:09.933 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:09.933 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:09.933 Program doxygen found: YES (/usr/bin/doxygen) 00:02:09.933 Configuring doxy-api-html.conf using configuration 00:02:09.933 Configuring doxy-api-man.conf using configuration 00:02:09.933 Program mandb found: YES (/usr/bin/mandb) 00:02:09.933 Program sphinx-build found: NO 00:02:09.933 Configuring rte_build_config.h using configuration 00:02:09.933 Message: 00:02:09.933 ================= 00:02:09.933 Applications Enabled 00:02:09.933 ================= 00:02:09.933 00:02:09.933 apps: 00:02:09.933 00:02:09.933 00:02:09.933 Message: 00:02:09.933 ================= 00:02:09.933 Libraries Enabled 00:02:09.933 ================= 00:02:09.933 00:02:09.933 libs: 00:02:09.933 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:09.933 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:09.933 cryptodev, dmadev, power, reorder, security, vhost, 00:02:09.933 00:02:09.933 Message: 00:02:09.933 =============== 00:02:09.933 Drivers Enabled 00:02:09.933 =============== 00:02:09.933 00:02:09.933 common: 00:02:09.933 00:02:09.933 bus: 00:02:09.933 pci, vdev, 00:02:09.933 mempool: 00:02:09.933 ring, 00:02:09.933 dma: 00:02:09.933 00:02:09.933 net: 00:02:09.933 00:02:09.933 crypto: 00:02:09.933 00:02:09.933 compress: 00:02:09.933 00:02:09.933 vdpa: 00:02:09.933 00:02:09.933 00:02:09.933 Message: 00:02:09.933 ================= 00:02:09.933 Content Skipped 00:02:09.933 ================= 00:02:09.933 00:02:09.933 apps: 00:02:09.933 dumpcap: explicitly disabled via build config 00:02:09.933 graph: explicitly disabled via build config 00:02:09.933 pdump: explicitly disabled via build config 00:02:09.933 proc-info: explicitly disabled via build config 00:02:09.933 test-acl: explicitly disabled via build config 00:02:09.933 test-bbdev: explicitly disabled via build config 00:02:09.933 test-cmdline: explicitly disabled via build config 00:02:09.933 test-compress-perf: explicitly disabled via build config 00:02:09.933 test-crypto-perf: explicitly disabled via build config 00:02:09.933 test-dma-perf: explicitly disabled via build config 00:02:09.933 test-eventdev: explicitly disabled via build config 00:02:09.933 test-fib: explicitly disabled via build config 00:02:09.933 test-flow-perf: explicitly disabled via build config 00:02:09.933 test-gpudev: explicitly disabled via build config 00:02:09.933 test-mldev: explicitly disabled via build config 00:02:09.933 test-pipeline: explicitly disabled via build config 00:02:09.933 test-pmd: explicitly disabled via build config 00:02:09.933 test-regex: explicitly disabled via build config 00:02:09.933 test-sad: explicitly disabled via build config 00:02:09.933 test-security-perf: explicitly disabled via build config 00:02:09.933 00:02:09.933 libs: 00:02:09.933 argparse: explicitly disabled via build config 00:02:09.933 metrics: explicitly disabled via build config 00:02:09.933 acl: explicitly disabled via build config 00:02:09.933 bbdev: explicitly disabled via build config 00:02:09.933 
bitratestats: explicitly disabled via build config 00:02:09.933 bpf: explicitly disabled via build config 00:02:09.933 cfgfile: explicitly disabled via build config 00:02:09.933 distributor: explicitly disabled via build config 00:02:09.933 efd: explicitly disabled via build config 00:02:09.933 eventdev: explicitly disabled via build config 00:02:09.933 dispatcher: explicitly disabled via build config 00:02:09.933 gpudev: explicitly disabled via build config 00:02:09.933 gro: explicitly disabled via build config 00:02:09.933 gso: explicitly disabled via build config 00:02:09.933 ip_frag: explicitly disabled via build config 00:02:09.933 jobstats: explicitly disabled via build config 00:02:09.933 latencystats: explicitly disabled via build config 00:02:09.933 lpm: explicitly disabled via build config 00:02:09.933 member: explicitly disabled via build config 00:02:09.933 pcapng: explicitly disabled via build config 00:02:09.933 rawdev: explicitly disabled via build config 00:02:09.933 regexdev: explicitly disabled via build config 00:02:09.933 mldev: explicitly disabled via build config 00:02:09.933 rib: explicitly disabled via build config 00:02:09.933 sched: explicitly disabled via build config 00:02:09.933 stack: explicitly disabled via build config 00:02:09.933 ipsec: explicitly disabled via build config 00:02:09.933 pdcp: explicitly disabled via build config 00:02:09.933 fib: explicitly disabled via build config 00:02:09.933 port: explicitly disabled via build config 00:02:09.933 pdump: explicitly disabled via build config 00:02:09.933 table: explicitly disabled via build config 00:02:09.933 pipeline: explicitly disabled via build config 00:02:09.933 graph: explicitly disabled via build config 00:02:09.933 node: explicitly disabled via build config 00:02:09.933 00:02:09.933 drivers: 00:02:09.933 common/cpt: not in enabled drivers build config 00:02:09.933 common/dpaax: not in enabled drivers build config 00:02:09.933 common/iavf: not in enabled drivers build config 00:02:09.933 common/idpf: not in enabled drivers build config 00:02:09.933 common/ionic: not in enabled drivers build config 00:02:09.933 common/mvep: not in enabled drivers build config 00:02:09.933 common/octeontx: not in enabled drivers build config 00:02:09.933 bus/auxiliary: not in enabled drivers build config 00:02:09.933 bus/cdx: not in enabled drivers build config 00:02:09.933 bus/dpaa: not in enabled drivers build config 00:02:09.933 bus/fslmc: not in enabled drivers build config 00:02:09.933 bus/ifpga: not in enabled drivers build config 00:02:09.933 bus/platform: not in enabled drivers build config 00:02:09.933 bus/uacce: not in enabled drivers build config 00:02:09.933 bus/vmbus: not in enabled drivers build config 00:02:09.933 common/cnxk: not in enabled drivers build config 00:02:09.933 common/mlx5: not in enabled drivers build config 00:02:09.933 common/nfp: not in enabled drivers build config 00:02:09.933 common/nitrox: not in enabled drivers build config 00:02:09.934 common/qat: not in enabled drivers build config 00:02:09.934 common/sfc_efx: not in enabled drivers build config 00:02:09.934 mempool/bucket: not in enabled drivers build config 00:02:09.934 mempool/cnxk: not in enabled drivers build config 00:02:09.934 mempool/dpaa: not in enabled drivers build config 00:02:09.934 mempool/dpaa2: not in enabled drivers build config 00:02:09.934 mempool/octeontx: not in enabled drivers build config 00:02:09.934 mempool/stack: not in enabled drivers build config 00:02:09.934 dma/cnxk: not in enabled drivers build 
config 00:02:09.934 dma/dpaa: not in enabled drivers build config 00:02:09.934 dma/dpaa2: not in enabled drivers build config 00:02:09.934 dma/hisilicon: not in enabled drivers build config 00:02:09.934 dma/idxd: not in enabled drivers build config 00:02:09.934 dma/ioat: not in enabled drivers build config 00:02:09.934 dma/skeleton: not in enabled drivers build config 00:02:09.934 net/af_packet: not in enabled drivers build config 00:02:09.934 net/af_xdp: not in enabled drivers build config 00:02:09.934 net/ark: not in enabled drivers build config 00:02:09.934 net/atlantic: not in enabled drivers build config 00:02:09.934 net/avp: not in enabled drivers build config 00:02:09.934 net/axgbe: not in enabled drivers build config 00:02:09.934 net/bnx2x: not in enabled drivers build config 00:02:09.934 net/bnxt: not in enabled drivers build config 00:02:09.934 net/bonding: not in enabled drivers build config 00:02:09.934 net/cnxk: not in enabled drivers build config 00:02:09.934 net/cpfl: not in enabled drivers build config 00:02:09.934 net/cxgbe: not in enabled drivers build config 00:02:09.934 net/dpaa: not in enabled drivers build config 00:02:09.934 net/dpaa2: not in enabled drivers build config 00:02:09.934 net/e1000: not in enabled drivers build config 00:02:09.934 net/ena: not in enabled drivers build config 00:02:09.934 net/enetc: not in enabled drivers build config 00:02:09.934 net/enetfec: not in enabled drivers build config 00:02:09.934 net/enic: not in enabled drivers build config 00:02:09.934 net/failsafe: not in enabled drivers build config 00:02:09.934 net/fm10k: not in enabled drivers build config 00:02:09.934 net/gve: not in enabled drivers build config 00:02:09.934 net/hinic: not in enabled drivers build config 00:02:09.934 net/hns3: not in enabled drivers build config 00:02:09.934 net/i40e: not in enabled drivers build config 00:02:09.934 net/iavf: not in enabled drivers build config 00:02:09.934 net/ice: not in enabled drivers build config 00:02:09.934 net/idpf: not in enabled drivers build config 00:02:09.934 net/igc: not in enabled drivers build config 00:02:09.934 net/ionic: not in enabled drivers build config 00:02:09.934 net/ipn3ke: not in enabled drivers build config 00:02:09.934 net/ixgbe: not in enabled drivers build config 00:02:09.934 net/mana: not in enabled drivers build config 00:02:09.934 net/memif: not in enabled drivers build config 00:02:09.934 net/mlx4: not in enabled drivers build config 00:02:09.934 net/mlx5: not in enabled drivers build config 00:02:09.934 net/mvneta: not in enabled drivers build config 00:02:09.934 net/mvpp2: not in enabled drivers build config 00:02:09.934 net/netvsc: not in enabled drivers build config 00:02:09.934 net/nfb: not in enabled drivers build config 00:02:09.934 net/nfp: not in enabled drivers build config 00:02:09.934 net/ngbe: not in enabled drivers build config 00:02:09.934 net/null: not in enabled drivers build config 00:02:09.934 net/octeontx: not in enabled drivers build config 00:02:09.934 net/octeon_ep: not in enabled drivers build config 00:02:09.934 net/pcap: not in enabled drivers build config 00:02:09.934 net/pfe: not in enabled drivers build config 00:02:09.934 net/qede: not in enabled drivers build config 00:02:09.934 net/ring: not in enabled drivers build config 00:02:09.934 net/sfc: not in enabled drivers build config 00:02:09.934 net/softnic: not in enabled drivers build config 00:02:09.934 net/tap: not in enabled drivers build config 00:02:09.934 net/thunderx: not in enabled drivers build config 00:02:09.934 
net/txgbe: not in enabled drivers build config 00:02:09.934 net/vdev_netvsc: not in enabled drivers build config 00:02:09.934 net/vhost: not in enabled drivers build config 00:02:09.934 net/virtio: not in enabled drivers build config 00:02:09.934 net/vmxnet3: not in enabled drivers build config 00:02:09.934 raw/*: missing internal dependency, "rawdev" 00:02:09.934 crypto/armv8: not in enabled drivers build config 00:02:09.934 crypto/bcmfs: not in enabled drivers build config 00:02:09.934 crypto/caam_jr: not in enabled drivers build config 00:02:09.934 crypto/ccp: not in enabled drivers build config 00:02:09.934 crypto/cnxk: not in enabled drivers build config 00:02:09.934 crypto/dpaa_sec: not in enabled drivers build config 00:02:09.934 crypto/dpaa2_sec: not in enabled drivers build config 00:02:09.934 crypto/ipsec_mb: not in enabled drivers build config 00:02:09.934 crypto/mlx5: not in enabled drivers build config 00:02:09.934 crypto/mvsam: not in enabled drivers build config 00:02:09.934 crypto/nitrox: not in enabled drivers build config 00:02:09.934 crypto/null: not in enabled drivers build config 00:02:09.934 crypto/octeontx: not in enabled drivers build config 00:02:09.934 crypto/openssl: not in enabled drivers build config 00:02:09.934 crypto/scheduler: not in enabled drivers build config 00:02:09.934 crypto/uadk: not in enabled drivers build config 00:02:09.934 crypto/virtio: not in enabled drivers build config 00:02:09.934 compress/isal: not in enabled drivers build config 00:02:09.934 compress/mlx5: not in enabled drivers build config 00:02:09.934 compress/nitrox: not in enabled drivers build config 00:02:09.934 compress/octeontx: not in enabled drivers build config 00:02:09.934 compress/zlib: not in enabled drivers build config 00:02:09.934 regex/*: missing internal dependency, "regexdev" 00:02:09.934 ml/*: missing internal dependency, "mldev" 00:02:09.934 vdpa/ifc: not in enabled drivers build config 00:02:09.934 vdpa/mlx5: not in enabled drivers build config 00:02:09.934 vdpa/nfp: not in enabled drivers build config 00:02:09.934 vdpa/sfc: not in enabled drivers build config 00:02:09.934 event/*: missing internal dependency, "eventdev" 00:02:09.934 baseband/*: missing internal dependency, "bbdev" 00:02:09.934 gpu/*: missing internal dependency, "gpudev" 00:02:09.934 00:02:09.934 00:02:09.934 Build targets in project: 85 00:02:09.934 00:02:09.934 DPDK 24.03.0 00:02:09.934 00:02:09.934 User defined options 00:02:09.934 buildtype : debug 00:02:09.934 default_library : shared 00:02:09.934 libdir : lib 00:02:09.934 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.934 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:09.934 c_link_args : 00:02:09.934 cpu_instruction_set: native 00:02:09.934 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:09.934 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:09.934 enable_docs : false 00:02:09.934 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:09.934 enable_kmods : false 00:02:09.934 max_lcores : 128 00:02:09.934 tests : false 00:02:09.934 00:02:09.934 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.934 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:09.934 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.934 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.934 [3/268] Linking static target lib/librte_kvargs.a 00:02:09.934 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.934 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.934 [6/268] Linking static target lib/librte_log.a 00:02:09.934 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.934 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.934 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.934 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.934 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.934 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:10.191 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:10.191 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:10.191 [15/268] Linking static target lib/librte_telemetry.a 00:02:10.191 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:10.191 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:10.191 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:10.191 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.448 [20/268] Linking target lib/librte_log.so.24.1 00:02:10.706 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:10.706 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:10.706 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.706 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.966 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.966 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.966 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:10.966 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.966 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.966 [30/268] Linking target lib/librte_telemetry.so.24.1 00:02:10.966 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:11.224 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:11.224 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:11.224 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:11.224 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:11.224 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:11.483 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:11.483 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:11.740 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:11.998 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:11.998 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:11.998 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:11.998 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:11.998 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:11.998 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:11.998 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:12.256 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:12.256 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:12.256 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:12.514 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:12.514 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:12.514 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:12.772 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:12.772 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:13.030 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:13.030 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:13.030 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:13.287 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:13.287 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:13.287 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:13.287 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.287 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:13.287 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:13.545 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:13.801 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.801 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.801 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:14.420 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:14.421 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:14.421 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.421 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:14.421 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:14.421 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.421 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:14.421 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:14.678 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.678 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.678 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.678 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.936 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.195 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:15.195 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:15.454 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.454 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.454 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:15.454 [86/268] Linking static target lib/librte_ring.a 00:02:15.454 [87/268] Linking static target lib/librte_eal.a 00:02:15.712 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.712 [89/268] Linking static target lib/librte_rcu.a 00:02:15.712 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.712 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:15.712 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:15.712 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.712 [94/268] Linking static target lib/librte_mempool.a 00:02:16.279 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.279 [96/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.279 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:16.279 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:16.537 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:16.537 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:16.537 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:16.537 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:16.796 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.796 [104/268] Linking static target lib/librte_mbuf.a 00:02:16.796 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:16.796 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.055 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.055 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.055 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.055 [110/268] Linking static target lib/librte_net.a 00:02:17.055 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:17.055 [112/268] Linking static target lib/librte_meter.a 00:02:17.313 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:17.572 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:17.572 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.572 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.572 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:17.572 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:17.572 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.139 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.139 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.139 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.397 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.397 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.397 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:18.397 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.397 [127/268] Linking static target lib/librte_pci.a 00:02:18.655 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.655 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:18.655 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:18.655 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.655 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:18.912 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:18.912 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:18.912 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.912 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:18.912 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:18.912 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.912 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:18.912 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:18.912 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:18.913 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:18.913 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.171 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.171 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:19.171 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:19.171 [147/268] Linking static target lib/librte_ethdev.a 00:02:19.430 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.430 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.430 [150/268] Linking static target lib/librte_cmdline.a 00:02:19.689 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:19.689 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.689 [153/268] Linking static target lib/librte_timer.a 00:02:19.689 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:19.947 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.947 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:19.947 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:19.947 [158/268] Linking static target lib/librte_hash.a 00:02:20.206 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.206 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.206 
[161/268] Linking static target lib/librte_compressdev.a 00:02:20.465 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.465 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.465 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:20.465 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.031 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.031 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.031 [168/268] Linking static target lib/librte_dmadev.a 00:02:21.031 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.031 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:21.031 [171/268] Linking static target lib/librte_cryptodev.a 00:02:21.031 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.031 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.289 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.289 [175/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.289 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.289 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.547 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.805 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:21.805 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.805 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.805 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.805 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:21.805 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.062 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.062 [186/268] Linking static target lib/librte_power.a 00:02:22.628 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.628 [188/268] Linking static target lib/librte_reorder.a 00:02:22.628 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:22.628 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.628 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.628 [192/268] Linking static target lib/librte_security.a 00:02:22.886 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.886 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.144 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.402 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.402 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.402 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.661 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.661 [200/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:23.661 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.919 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:23.919 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.919 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.176 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.176 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.176 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.176 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:24.435 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.435 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.435 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.435 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.435 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:24.693 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.693 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.693 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:24.693 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:24.693 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.693 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.693 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:24.693 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.693 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.951 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.951 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.951 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.951 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.951 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:25.276 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.843 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.843 [230/268] Linking static target lib/librte_vhost.a 00:02:26.778 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.778 [232/268] Linking target lib/librte_eal.so.24.1 00:02:27.036 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:27.036 [234/268] Linking target lib/librte_ring.so.24.1 00:02:27.036 [235/268] Linking target lib/librte_meter.so.24.1 00:02:27.036 [236/268] Linking target lib/librte_pci.so.24.1 00:02:27.036 [237/268] Linking target lib/librte_timer.so.24.1 00:02:27.036 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:27.036 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 
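The DPDK objects and the bus/mempool PMDs linked above come from DPDK's meson/ninja sub-build under dpdk/build-tmp. As a rough illustration, a configure-and-build invocation consistent with the "User defined options" summary printed earlier in this log might look like the sketch below; the wrapper the autotest job actually uses is not shown in this excerpt, and the long disable_apps/disable_libs lists are omitted, so treat the exact command as an assumption.

    # Sketch only: mirrors the meson options reported in the DPDK build summary above.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dmax_lcores=128
    ninja -C build-tmp -j 10   # same backend command the log reports further down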
00:02:27.295 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:27.295 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:27.295 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:27.295 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:27.295 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:27.295 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:27.295 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:27.295 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:27.295 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.295 [249/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.295 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:27.295 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:27.554 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:27.554 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:27.554 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:27.812 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:27.812 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:27.812 [257/268] Linking target lib/librte_net.so.24.1 00:02:27.812 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:27.812 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:27.812 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:27.812 [261/268] Linking target lib/librte_hash.so.24.1 00:02:27.812 [262/268] Linking target lib/librte_security.so.24.1 00:02:27.812 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:27.812 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:28.070 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:28.070 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:28.070 [267/268] Linking target lib/librte_power.so.24.1 00:02:28.070 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:28.070 INFO: autodetecting backend as ninja 00:02:28.070 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:29.446 CC lib/log/log.o 00:02:29.446 CC lib/log/log_flags.o 00:02:29.446 CC lib/log/log_deprecated.o 00:02:29.446 CC lib/ut_mock/mock.o 00:02:29.446 CC lib/ut/ut.o 00:02:29.446 LIB libspdk_ut.a 00:02:29.446 LIB libspdk_log.a 00:02:29.446 LIB libspdk_ut_mock.a 00:02:29.446 SO libspdk_ut.so.2.0 00:02:29.446 SO libspdk_log.so.7.0 00:02:29.705 SO libspdk_ut_mock.so.6.0 00:02:29.705 SYMLINK libspdk_ut.so 00:02:29.705 SYMLINK libspdk_log.so 00:02:29.705 SYMLINK libspdk_ut_mock.so 00:02:29.963 CC lib/dma/dma.o 00:02:29.963 CC lib/ioat/ioat.o 00:02:29.963 CC lib/util/base64.o 00:02:29.963 CC lib/util/cpuset.o 00:02:29.963 CC lib/util/bit_array.o 00:02:29.963 CC lib/util/crc16.o 00:02:29.963 CC lib/util/crc32.o 00:02:29.963 CC lib/util/crc32c.o 00:02:29.963 CXX lib/trace_parser/trace.o 00:02:29.963 CC lib/vfio_user/host/vfio_user_pci.o 00:02:29.963 CC lib/vfio_user/host/vfio_user.o 00:02:29.963 CC lib/util/crc32_ieee.o 00:02:29.963 CC 
lib/util/crc64.o 00:02:29.963 CC lib/util/dif.o 00:02:29.963 LIB libspdk_dma.a 00:02:29.963 CC lib/util/fd.o 00:02:30.221 SO libspdk_dma.so.4.0 00:02:30.221 CC lib/util/fd_group.o 00:02:30.221 CC lib/util/file.o 00:02:30.221 CC lib/util/hexlify.o 00:02:30.221 LIB libspdk_ioat.a 00:02:30.221 SYMLINK libspdk_dma.so 00:02:30.222 CC lib/util/iov.o 00:02:30.222 SO libspdk_ioat.so.7.0 00:02:30.222 CC lib/util/math.o 00:02:30.222 SYMLINK libspdk_ioat.so 00:02:30.222 CC lib/util/net.o 00:02:30.222 LIB libspdk_vfio_user.a 00:02:30.222 CC lib/util/pipe.o 00:02:30.222 CC lib/util/strerror_tls.o 00:02:30.222 SO libspdk_vfio_user.so.5.0 00:02:30.222 CC lib/util/string.o 00:02:30.479 CC lib/util/uuid.o 00:02:30.479 SYMLINK libspdk_vfio_user.so 00:02:30.479 CC lib/util/xor.o 00:02:30.479 CC lib/util/zipf.o 00:02:30.479 LIB libspdk_util.a 00:02:30.737 SO libspdk_util.so.10.0 00:02:30.995 SYMLINK libspdk_util.so 00:02:30.995 LIB libspdk_trace_parser.a 00:02:30.995 SO libspdk_trace_parser.so.5.0 00:02:30.995 SYMLINK libspdk_trace_parser.so 00:02:30.995 CC lib/json/json_parse.o 00:02:30.995 CC lib/json/json_util.o 00:02:30.995 CC lib/env_dpdk/env.o 00:02:30.995 CC lib/json/json_write.o 00:02:30.995 CC lib/conf/conf.o 00:02:30.995 CC lib/env_dpdk/memory.o 00:02:30.995 CC lib/rdma_utils/rdma_utils.o 00:02:30.995 CC lib/rdma_provider/common.o 00:02:30.995 CC lib/vmd/vmd.o 00:02:30.995 CC lib/idxd/idxd.o 00:02:31.252 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:31.252 CC lib/vmd/led.o 00:02:31.252 LIB libspdk_conf.a 00:02:31.252 SO libspdk_conf.so.6.0 00:02:31.252 LIB libspdk_rdma_utils.a 00:02:31.252 SO libspdk_rdma_utils.so.1.0 00:02:31.252 LIB libspdk_json.a 00:02:31.509 SYMLINK libspdk_conf.so 00:02:31.509 CC lib/idxd/idxd_user.o 00:02:31.509 CC lib/idxd/idxd_kernel.o 00:02:31.509 SO libspdk_json.so.6.0 00:02:31.509 SYMLINK libspdk_rdma_utils.so 00:02:31.509 CC lib/env_dpdk/pci.o 00:02:31.509 LIB libspdk_rdma_provider.a 00:02:31.509 CC lib/env_dpdk/init.o 00:02:31.509 SO libspdk_rdma_provider.so.6.0 00:02:31.509 SYMLINK libspdk_json.so 00:02:31.509 CC lib/env_dpdk/threads.o 00:02:31.509 SYMLINK libspdk_rdma_provider.so 00:02:31.509 CC lib/env_dpdk/pci_ioat.o 00:02:31.509 CC lib/env_dpdk/pci_virtio.o 00:02:31.767 CC lib/env_dpdk/pci_vmd.o 00:02:31.767 LIB libspdk_idxd.a 00:02:31.767 CC lib/env_dpdk/pci_idxd.o 00:02:31.767 SO libspdk_idxd.so.12.0 00:02:31.767 CC lib/jsonrpc/jsonrpc_server.o 00:02:31.767 CC lib/env_dpdk/pci_event.o 00:02:31.767 CC lib/env_dpdk/sigbus_handler.o 00:02:31.767 SYMLINK libspdk_idxd.so 00:02:31.767 CC lib/env_dpdk/pci_dpdk.o 00:02:31.767 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:31.767 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:31.767 LIB libspdk_vmd.a 00:02:31.767 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:31.767 CC lib/jsonrpc/jsonrpc_client.o 00:02:31.767 SO libspdk_vmd.so.6.0 00:02:32.025 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:32.025 SYMLINK libspdk_vmd.so 00:02:32.025 LIB libspdk_jsonrpc.a 00:02:32.284 SO libspdk_jsonrpc.so.6.0 00:02:32.284 SYMLINK libspdk_jsonrpc.so 00:02:32.542 LIB libspdk_env_dpdk.a 00:02:32.542 CC lib/rpc/rpc.o 00:02:32.812 SO libspdk_env_dpdk.so.15.0 00:02:32.812 LIB libspdk_rpc.a 00:02:32.812 SO libspdk_rpc.so.6.0 00:02:32.812 SYMLINK libspdk_rpc.so 00:02:32.812 SYMLINK libspdk_env_dpdk.so 00:02:33.070 CC lib/trace/trace.o 00:02:33.070 CC lib/notify/notify.o 00:02:33.070 CC lib/notify/notify_rpc.o 00:02:33.070 CC lib/trace/trace_rpc.o 00:02:33.070 CC lib/trace/trace_flags.o 00:02:33.070 CC lib/keyring/keyring.o 00:02:33.070 CC lib/keyring/keyring_rpc.o 
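The CC/LIB lines in this stretch of the log come from SPDK's own Makefile-based build (the DPDK sub-build above feeds into it as a submodule), while the SO/SYMLINK lines link the versioned shared objects and create their unversioned symlinks. A minimal sketch of the equivalent manual flow, assuming a standard SPDK checkout; the concrete ./configure flags used by this job are not visible in this excerpt, so the ones below are illustrative only.

    cd /home/vagrant/spdk_repo/spdk
    git submodule update --init        # dpdk and other submodules
    ./configure --with-shared          # shared libraries, consistent with the SO/SYMLINK steps; flags assumed
    make -j"$(nproc)"                  # produces the CC/LIB/SO/SYMLINK output seen here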
00:02:33.328 LIB libspdk_notify.a 00:02:33.328 SO libspdk_notify.so.6.0 00:02:33.328 LIB libspdk_trace.a 00:02:33.328 SYMLINK libspdk_notify.so 00:02:33.328 LIB libspdk_keyring.a 00:02:33.328 SO libspdk_trace.so.10.0 00:02:33.328 SO libspdk_keyring.so.1.0 00:02:33.328 SYMLINK libspdk_trace.so 00:02:33.587 SYMLINK libspdk_keyring.so 00:02:33.587 CC lib/thread/iobuf.o 00:02:33.587 CC lib/thread/thread.o 00:02:33.587 CC lib/sock/sock.o 00:02:33.587 CC lib/sock/sock_rpc.o 00:02:34.152 LIB libspdk_sock.a 00:02:34.152 SO libspdk_sock.so.10.0 00:02:34.152 SYMLINK libspdk_sock.so 00:02:34.411 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:34.411 CC lib/nvme/nvme_ctrlr.o 00:02:34.411 CC lib/nvme/nvme_ns_cmd.o 00:02:34.411 CC lib/nvme/nvme_fabric.o 00:02:34.411 CC lib/nvme/nvme_ns.o 00:02:34.411 CC lib/nvme/nvme_qpair.o 00:02:34.411 CC lib/nvme/nvme_pcie_common.o 00:02:34.411 CC lib/nvme/nvme_pcie.o 00:02:34.411 CC lib/nvme/nvme.o 00:02:35.344 CC lib/nvme/nvme_quirks.o 00:02:35.344 CC lib/nvme/nvme_transport.o 00:02:35.344 LIB libspdk_thread.a 00:02:35.344 SO libspdk_thread.so.10.1 00:02:35.344 CC lib/nvme/nvme_discovery.o 00:02:35.344 SYMLINK libspdk_thread.so 00:02:35.344 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:35.603 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:35.603 CC lib/nvme/nvme_tcp.o 00:02:35.603 CC lib/nvme/nvme_opal.o 00:02:35.603 CC lib/accel/accel.o 00:02:35.861 CC lib/blob/blobstore.o 00:02:35.861 CC lib/nvme/nvme_io_msg.o 00:02:36.119 CC lib/nvme/nvme_poll_group.o 00:02:36.119 CC lib/nvme/nvme_zns.o 00:02:36.119 CC lib/accel/accel_rpc.o 00:02:36.119 CC lib/accel/accel_sw.o 00:02:36.119 CC lib/nvme/nvme_stubs.o 00:02:36.376 CC lib/nvme/nvme_auth.o 00:02:36.635 CC lib/init/json_config.o 00:02:36.635 LIB libspdk_accel.a 00:02:36.635 SO libspdk_accel.so.16.0 00:02:36.635 CC lib/virtio/virtio.o 00:02:36.635 CC lib/virtio/virtio_vhost_user.o 00:02:36.635 CC lib/virtio/virtio_vfio_user.o 00:02:36.635 CC lib/virtio/virtio_pci.o 00:02:36.635 SYMLINK libspdk_accel.so 00:02:36.635 CC lib/nvme/nvme_cuse.o 00:02:36.892 CC lib/init/subsystem.o 00:02:36.893 CC lib/blob/request.o 00:02:36.893 CC lib/bdev/bdev.o 00:02:36.893 CC lib/init/subsystem_rpc.o 00:02:36.893 CC lib/bdev/bdev_rpc.o 00:02:36.893 CC lib/bdev/bdev_zone.o 00:02:36.893 LIB libspdk_virtio.a 00:02:37.150 SO libspdk_virtio.so.7.0 00:02:37.150 CC lib/nvme/nvme_rdma.o 00:02:37.150 CC lib/init/rpc.o 00:02:37.150 SYMLINK libspdk_virtio.so 00:02:37.150 CC lib/bdev/part.o 00:02:37.150 CC lib/bdev/scsi_nvme.o 00:02:37.150 CC lib/blob/zeroes.o 00:02:37.150 CC lib/blob/blob_bs_dev.o 00:02:37.409 LIB libspdk_init.a 00:02:37.409 SO libspdk_init.so.5.0 00:02:37.409 SYMLINK libspdk_init.so 00:02:37.667 CC lib/event/reactor.o 00:02:37.667 CC lib/event/app.o 00:02:37.667 CC lib/event/log_rpc.o 00:02:37.667 CC lib/event/app_rpc.o 00:02:37.667 CC lib/event/scheduler_static.o 00:02:38.233 LIB libspdk_event.a 00:02:38.233 SO libspdk_event.so.14.0 00:02:38.233 SYMLINK libspdk_event.so 00:02:38.491 LIB libspdk_nvme.a 00:02:38.751 SO libspdk_nvme.so.13.1 00:02:38.751 LIB libspdk_blob.a 00:02:38.751 SO libspdk_blob.so.11.0 00:02:39.009 SYMLINK libspdk_blob.so 00:02:39.009 SYMLINK libspdk_nvme.so 00:02:39.267 CC lib/blobfs/blobfs.o 00:02:39.267 CC lib/blobfs/tree.o 00:02:39.267 CC lib/lvol/lvol.o 00:02:39.833 LIB libspdk_bdev.a 00:02:39.833 SO libspdk_bdev.so.16.0 00:02:39.833 SYMLINK libspdk_bdev.so 00:02:40.091 LIB libspdk_blobfs.a 00:02:40.091 CC lib/nbd/nbd.o 00:02:40.091 CC lib/nbd/nbd_rpc.o 00:02:40.091 CC lib/nvmf/ctrlr.o 00:02:40.091 CC lib/nvmf/ctrlr_discovery.o 
00:02:40.091 CC lib/scsi/dev.o 00:02:40.091 CC lib/nvmf/ctrlr_bdev.o 00:02:40.091 CC lib/ftl/ftl_core.o 00:02:40.091 CC lib/ublk/ublk.o 00:02:40.091 SO libspdk_blobfs.so.10.0 00:02:40.091 LIB libspdk_lvol.a 00:02:40.355 SO libspdk_lvol.so.10.0 00:02:40.355 SYMLINK libspdk_blobfs.so 00:02:40.355 CC lib/ublk/ublk_rpc.o 00:02:40.355 CC lib/nvmf/subsystem.o 00:02:40.355 SYMLINK libspdk_lvol.so 00:02:40.355 CC lib/nvmf/nvmf.o 00:02:40.355 CC lib/scsi/lun.o 00:02:40.355 CC lib/ftl/ftl_init.o 00:02:40.614 CC lib/ftl/ftl_layout.o 00:02:40.614 LIB libspdk_nbd.a 00:02:40.614 SO libspdk_nbd.so.7.0 00:02:40.614 CC lib/ftl/ftl_debug.o 00:02:40.614 SYMLINK libspdk_nbd.so 00:02:40.614 CC lib/scsi/port.o 00:02:40.614 CC lib/ftl/ftl_io.o 00:02:40.614 CC lib/scsi/scsi.o 00:02:40.871 CC lib/nvmf/nvmf_rpc.o 00:02:40.871 LIB libspdk_ublk.a 00:02:40.871 CC lib/scsi/scsi_bdev.o 00:02:40.871 SO libspdk_ublk.so.3.0 00:02:40.871 CC lib/scsi/scsi_pr.o 00:02:40.871 CC lib/nvmf/transport.o 00:02:40.872 CC lib/ftl/ftl_sb.o 00:02:40.872 CC lib/ftl/ftl_l2p.o 00:02:40.872 SYMLINK libspdk_ublk.so 00:02:40.872 CC lib/ftl/ftl_l2p_flat.o 00:02:41.130 CC lib/ftl/ftl_nv_cache.o 00:02:41.130 CC lib/ftl/ftl_band.o 00:02:41.130 CC lib/ftl/ftl_band_ops.o 00:02:41.130 CC lib/nvmf/tcp.o 00:02:41.388 CC lib/nvmf/stubs.o 00:02:41.388 CC lib/scsi/scsi_rpc.o 00:02:41.388 CC lib/scsi/task.o 00:02:41.646 CC lib/nvmf/mdns_server.o 00:02:41.646 CC lib/nvmf/rdma.o 00:02:41.646 CC lib/nvmf/auth.o 00:02:41.646 CC lib/ftl/ftl_writer.o 00:02:41.646 CC lib/ftl/ftl_rq.o 00:02:41.646 LIB libspdk_scsi.a 00:02:41.646 CC lib/ftl/ftl_reloc.o 00:02:41.646 SO libspdk_scsi.so.9.0 00:02:41.903 CC lib/ftl/ftl_l2p_cache.o 00:02:41.903 SYMLINK libspdk_scsi.so 00:02:41.903 CC lib/ftl/ftl_p2l.o 00:02:41.903 CC lib/ftl/mngt/ftl_mngt.o 00:02:41.903 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:41.903 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:42.161 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:42.161 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:42.161 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:42.161 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:42.419 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:42.419 CC lib/iscsi/conn.o 00:02:42.419 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:42.419 CC lib/vhost/vhost.o 00:02:42.419 CC lib/vhost/vhost_rpc.o 00:02:42.419 CC lib/vhost/vhost_scsi.o 00:02:42.419 CC lib/vhost/vhost_blk.o 00:02:42.419 CC lib/vhost/rte_vhost_user.o 00:02:42.419 CC lib/iscsi/init_grp.o 00:02:42.677 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:42.677 CC lib/iscsi/iscsi.o 00:02:42.677 CC lib/iscsi/md5.o 00:02:42.934 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:42.934 CC lib/iscsi/param.o 00:02:42.934 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:43.191 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:43.191 CC lib/iscsi/portal_grp.o 00:02:43.191 CC lib/iscsi/tgt_node.o 00:02:43.191 CC lib/iscsi/iscsi_subsystem.o 00:02:43.191 CC lib/iscsi/iscsi_rpc.o 00:02:43.449 CC lib/ftl/utils/ftl_conf.o 00:02:43.449 CC lib/ftl/utils/ftl_md.o 00:02:43.449 CC lib/ftl/utils/ftl_mempool.o 00:02:43.449 CC lib/ftl/utils/ftl_bitmap.o 00:02:43.449 LIB libspdk_nvmf.a 00:02:43.449 LIB libspdk_vhost.a 00:02:43.449 CC lib/iscsi/task.o 00:02:43.706 CC lib/ftl/utils/ftl_property.o 00:02:43.706 SO libspdk_vhost.so.8.0 00:02:43.706 SO libspdk_nvmf.so.19.0 00:02:43.706 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:43.706 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:43.706 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:43.706 SYMLINK libspdk_vhost.so 00:02:43.706 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:43.706 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:02:43.706 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:43.706 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:43.971 SYMLINK libspdk_nvmf.so 00:02:43.971 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:43.971 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:43.971 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:43.971 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:43.971 CC lib/ftl/base/ftl_base_dev.o 00:02:43.971 CC lib/ftl/base/ftl_base_bdev.o 00:02:43.971 CC lib/ftl/ftl_trace.o 00:02:44.244 LIB libspdk_iscsi.a 00:02:44.244 SO libspdk_iscsi.so.8.0 00:02:44.244 LIB libspdk_ftl.a 00:02:44.501 SYMLINK libspdk_iscsi.so 00:02:44.501 SO libspdk_ftl.so.9.0 00:02:44.759 SYMLINK libspdk_ftl.so 00:02:45.322 CC module/env_dpdk/env_dpdk_rpc.o 00:02:45.322 CC module/accel/dsa/accel_dsa.o 00:02:45.322 CC module/accel/ioat/accel_ioat.o 00:02:45.322 CC module/blob/bdev/blob_bdev.o 00:02:45.322 CC module/sock/uring/uring.o 00:02:45.322 CC module/accel/error/accel_error.o 00:02:45.322 CC module/accel/iaa/accel_iaa.o 00:02:45.322 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:45.322 CC module/keyring/file/keyring.o 00:02:45.322 CC module/sock/posix/posix.o 00:02:45.322 LIB libspdk_env_dpdk_rpc.a 00:02:45.322 SO libspdk_env_dpdk_rpc.so.6.0 00:02:45.322 SYMLINK libspdk_env_dpdk_rpc.so 00:02:45.322 CC module/accel/error/accel_error_rpc.o 00:02:45.322 CC module/keyring/file/keyring_rpc.o 00:02:45.580 CC module/accel/iaa/accel_iaa_rpc.o 00:02:45.580 LIB libspdk_scheduler_dynamic.a 00:02:45.580 CC module/accel/ioat/accel_ioat_rpc.o 00:02:45.580 CC module/accel/dsa/accel_dsa_rpc.o 00:02:45.580 SO libspdk_scheduler_dynamic.so.4.0 00:02:45.580 LIB libspdk_blob_bdev.a 00:02:45.580 LIB libspdk_accel_error.a 00:02:45.580 SO libspdk_blob_bdev.so.11.0 00:02:45.580 LIB libspdk_keyring_file.a 00:02:45.580 SO libspdk_accel_error.so.2.0 00:02:45.580 SYMLINK libspdk_scheduler_dynamic.so 00:02:45.580 SO libspdk_keyring_file.so.1.0 00:02:45.580 LIB libspdk_accel_iaa.a 00:02:45.580 SYMLINK libspdk_blob_bdev.so 00:02:45.580 LIB libspdk_accel_ioat.a 00:02:45.580 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:45.580 SYMLINK libspdk_accel_error.so 00:02:45.580 SO libspdk_accel_iaa.so.3.0 00:02:45.580 LIB libspdk_accel_dsa.a 00:02:45.580 SO libspdk_accel_ioat.so.6.0 00:02:45.580 SYMLINK libspdk_keyring_file.so 00:02:45.580 SO libspdk_accel_dsa.so.5.0 00:02:45.838 SYMLINK libspdk_accel_ioat.so 00:02:45.838 SYMLINK libspdk_accel_iaa.so 00:02:45.838 SYMLINK libspdk_accel_dsa.so 00:02:45.838 CC module/scheduler/gscheduler/gscheduler.o 00:02:45.838 LIB libspdk_scheduler_dpdk_governor.a 00:02:45.838 CC module/keyring/linux/keyring.o 00:02:45.838 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:45.838 LIB libspdk_scheduler_gscheduler.a 00:02:45.838 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:45.838 CC module/keyring/linux/keyring_rpc.o 00:02:45.838 CC module/bdev/delay/vbdev_delay.o 00:02:45.838 CC module/bdev/lvol/vbdev_lvol.o 00:02:46.096 CC module/bdev/error/vbdev_error.o 00:02:46.096 LIB libspdk_sock_uring.a 00:02:46.096 CC module/bdev/gpt/gpt.o 00:02:46.096 SO libspdk_scheduler_gscheduler.so.4.0 00:02:46.096 CC module/blobfs/bdev/blobfs_bdev.o 00:02:46.096 SO libspdk_sock_uring.so.5.0 00:02:46.096 LIB libspdk_sock_posix.a 00:02:46.096 SYMLINK libspdk_scheduler_gscheduler.so 00:02:46.096 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:46.096 SYMLINK libspdk_sock_uring.so 00:02:46.096 SO libspdk_sock_posix.so.6.0 00:02:46.096 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:46.096 LIB libspdk_keyring_linux.a 00:02:46.096 SO libspdk_keyring_linux.so.1.0 00:02:46.096 
SYMLINK libspdk_sock_posix.so 00:02:46.096 CC module/bdev/error/vbdev_error_rpc.o 00:02:46.096 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:46.096 CC module/bdev/gpt/vbdev_gpt.o 00:02:46.096 CC module/bdev/malloc/bdev_malloc.o 00:02:46.096 SYMLINK libspdk_keyring_linux.so 00:02:46.096 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:46.354 LIB libspdk_blobfs_bdev.a 00:02:46.354 SO libspdk_blobfs_bdev.so.6.0 00:02:46.354 LIB libspdk_bdev_error.a 00:02:46.354 SYMLINK libspdk_blobfs_bdev.so 00:02:46.354 LIB libspdk_bdev_delay.a 00:02:46.354 SO libspdk_bdev_error.so.6.0 00:02:46.354 SO libspdk_bdev_delay.so.6.0 00:02:46.354 CC module/bdev/null/bdev_null.o 00:02:46.354 LIB libspdk_bdev_gpt.a 00:02:46.354 CC module/bdev/nvme/bdev_nvme.o 00:02:46.612 SO libspdk_bdev_gpt.so.6.0 00:02:46.612 SYMLINK libspdk_bdev_delay.so 00:02:46.612 SYMLINK libspdk_bdev_error.so 00:02:46.612 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:46.612 CC module/bdev/nvme/nvme_rpc.o 00:02:46.612 CC module/bdev/nvme/bdev_mdns_client.o 00:02:46.612 LIB libspdk_bdev_lvol.a 00:02:46.612 SYMLINK libspdk_bdev_gpt.so 00:02:46.612 LIB libspdk_bdev_malloc.a 00:02:46.612 SO libspdk_bdev_lvol.so.6.0 00:02:46.612 CC module/bdev/passthru/vbdev_passthru.o 00:02:46.612 SO libspdk_bdev_malloc.so.6.0 00:02:46.612 CC module/bdev/raid/bdev_raid.o 00:02:46.612 SYMLINK libspdk_bdev_lvol.so 00:02:46.612 CC module/bdev/nvme/vbdev_opal.o 00:02:46.612 CC module/bdev/null/bdev_null_rpc.o 00:02:46.612 SYMLINK libspdk_bdev_malloc.so 00:02:46.612 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:46.612 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:46.870 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:46.870 CC module/bdev/split/vbdev_split.o 00:02:46.870 LIB libspdk_bdev_null.a 00:02:46.870 SO libspdk_bdev_null.so.6.0 00:02:46.870 CC module/bdev/split/vbdev_split_rpc.o 00:02:46.870 LIB libspdk_bdev_passthru.a 00:02:46.870 CC module/bdev/raid/bdev_raid_rpc.o 00:02:46.870 SO libspdk_bdev_passthru.so.6.0 00:02:46.870 SYMLINK libspdk_bdev_null.so 00:02:47.127 SYMLINK libspdk_bdev_passthru.so 00:02:47.127 CC module/bdev/raid/bdev_raid_sb.o 00:02:47.127 LIB libspdk_bdev_split.a 00:02:47.127 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:47.127 CC module/bdev/uring/bdev_uring.o 00:02:47.127 SO libspdk_bdev_split.so.6.0 00:02:47.127 CC module/bdev/uring/bdev_uring_rpc.o 00:02:47.127 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:47.127 CC module/bdev/aio/bdev_aio.o 00:02:47.127 SYMLINK libspdk_bdev_split.so 00:02:47.127 CC module/bdev/raid/raid0.o 00:02:47.127 CC module/bdev/ftl/bdev_ftl.o 00:02:47.385 CC module/bdev/raid/raid1.o 00:02:47.385 CC module/bdev/raid/concat.o 00:02:47.385 LIB libspdk_bdev_zone_block.a 00:02:47.385 SO libspdk_bdev_zone_block.so.6.0 00:02:47.385 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:47.643 LIB libspdk_bdev_uring.a 00:02:47.643 CC module/bdev/aio/bdev_aio_rpc.o 00:02:47.643 CC module/bdev/iscsi/bdev_iscsi.o 00:02:47.643 SO libspdk_bdev_uring.so.6.0 00:02:47.643 SYMLINK libspdk_bdev_zone_block.so 00:02:47.643 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:47.643 LIB libspdk_bdev_raid.a 00:02:47.643 SYMLINK libspdk_bdev_uring.so 00:02:47.643 SO libspdk_bdev_raid.so.6.0 00:02:47.643 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:47.643 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:47.643 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:47.643 LIB libspdk_bdev_aio.a 00:02:47.643 SO libspdk_bdev_aio.so.6.0 00:02:47.643 SYMLINK libspdk_bdev_raid.so 00:02:47.643 LIB libspdk_bdev_ftl.a 00:02:47.900 SO libspdk_bdev_ftl.so.6.0 00:02:47.900 
SYMLINK libspdk_bdev_aio.so 00:02:47.900 SYMLINK libspdk_bdev_ftl.so 00:02:47.900 LIB libspdk_bdev_iscsi.a 00:02:47.900 SO libspdk_bdev_iscsi.so.6.0 00:02:47.900 SYMLINK libspdk_bdev_iscsi.so 00:02:48.158 LIB libspdk_bdev_virtio.a 00:02:48.158 SO libspdk_bdev_virtio.so.6.0 00:02:48.415 SYMLINK libspdk_bdev_virtio.so 00:02:48.673 LIB libspdk_bdev_nvme.a 00:02:48.930 SO libspdk_bdev_nvme.so.7.0 00:02:48.930 SYMLINK libspdk_bdev_nvme.so 00:02:49.497 CC module/event/subsystems/sock/sock.o 00:02:49.497 CC module/event/subsystems/iobuf/iobuf.o 00:02:49.497 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:49.497 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:49.497 CC module/event/subsystems/vmd/vmd.o 00:02:49.497 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:49.497 CC module/event/subsystems/keyring/keyring.o 00:02:49.497 CC module/event/subsystems/scheduler/scheduler.o 00:02:49.497 LIB libspdk_event_sock.a 00:02:49.497 LIB libspdk_event_keyring.a 00:02:49.497 LIB libspdk_event_vhost_blk.a 00:02:49.497 LIB libspdk_event_vmd.a 00:02:49.497 LIB libspdk_event_scheduler.a 00:02:49.497 SO libspdk_event_sock.so.5.0 00:02:49.497 LIB libspdk_event_iobuf.a 00:02:49.755 SO libspdk_event_keyring.so.1.0 00:02:49.755 SO libspdk_event_vhost_blk.so.3.0 00:02:49.755 SO libspdk_event_scheduler.so.4.0 00:02:49.755 SO libspdk_event_vmd.so.6.0 00:02:49.755 SYMLINK libspdk_event_sock.so 00:02:49.755 SO libspdk_event_iobuf.so.3.0 00:02:49.755 SYMLINK libspdk_event_vhost_blk.so 00:02:49.755 SYMLINK libspdk_event_keyring.so 00:02:49.755 SYMLINK libspdk_event_scheduler.so 00:02:49.755 SYMLINK libspdk_event_vmd.so 00:02:49.755 SYMLINK libspdk_event_iobuf.so 00:02:50.014 CC module/event/subsystems/accel/accel.o 00:02:50.272 LIB libspdk_event_accel.a 00:02:50.272 SO libspdk_event_accel.so.6.0 00:02:50.272 SYMLINK libspdk_event_accel.so 00:02:50.529 CC module/event/subsystems/bdev/bdev.o 00:02:50.786 LIB libspdk_event_bdev.a 00:02:50.786 SO libspdk_event_bdev.so.6.0 00:02:50.786 SYMLINK libspdk_event_bdev.so 00:02:51.049 CC module/event/subsystems/ublk/ublk.o 00:02:51.049 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:51.049 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:51.049 CC module/event/subsystems/nbd/nbd.o 00:02:51.049 CC module/event/subsystems/scsi/scsi.o 00:02:51.326 LIB libspdk_event_nbd.a 00:02:51.326 LIB libspdk_event_ublk.a 00:02:51.326 LIB libspdk_event_scsi.a 00:02:51.326 SO libspdk_event_nbd.so.6.0 00:02:51.326 SO libspdk_event_ublk.so.3.0 00:02:51.326 SO libspdk_event_scsi.so.6.0 00:02:51.326 SYMLINK libspdk_event_nbd.so 00:02:51.326 SYMLINK libspdk_event_ublk.so 00:02:51.326 LIB libspdk_event_nvmf.a 00:02:51.326 SYMLINK libspdk_event_scsi.so 00:02:51.326 SO libspdk_event_nvmf.so.6.0 00:02:51.584 SYMLINK libspdk_event_nvmf.so 00:02:51.584 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:51.584 CC module/event/subsystems/iscsi/iscsi.o 00:02:51.841 LIB libspdk_event_vhost_scsi.a 00:02:51.841 LIB libspdk_event_iscsi.a 00:02:51.841 SO libspdk_event_vhost_scsi.so.3.0 00:02:51.841 SO libspdk_event_iscsi.so.6.0 00:02:51.841 SYMLINK libspdk_event_vhost_scsi.so 00:02:51.841 SYMLINK libspdk_event_iscsi.so 00:02:52.114 SO libspdk.so.6.0 00:02:52.114 SYMLINK libspdk.so 00:02:52.372 CC app/trace_record/trace_record.o 00:02:52.372 CXX app/trace/trace.o 00:02:52.372 CC app/spdk_lspci/spdk_lspci.o 00:02:52.372 CC app/nvmf_tgt/nvmf_main.o 00:02:52.372 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:52.372 CC app/iscsi_tgt/iscsi_tgt.o 00:02:52.372 CC test/thread/poller_perf/poller_perf.o 00:02:52.372 
CC examples/ioat/perf/perf.o 00:02:52.372 CC app/spdk_tgt/spdk_tgt.o 00:02:52.372 CC examples/util/zipf/zipf.o 00:02:52.630 LINK spdk_lspci 00:02:52.630 LINK interrupt_tgt 00:02:52.630 LINK nvmf_tgt 00:02:52.630 LINK poller_perf 00:02:52.630 LINK zipf 00:02:52.630 LINK spdk_trace_record 00:02:52.630 LINK ioat_perf 00:02:52.630 LINK spdk_tgt 00:02:52.630 LINK iscsi_tgt 00:02:52.887 LINK spdk_trace 00:02:52.887 CC app/spdk_nvme_perf/perf.o 00:02:52.887 CC examples/ioat/verify/verify.o 00:02:52.887 CC app/spdk_nvme_identify/identify.o 00:02:52.887 CC app/spdk_top/spdk_top.o 00:02:52.887 CC app/spdk_nvme_discover/discovery_aer.o 00:02:53.145 CC test/dma/test_dma/test_dma.o 00:02:53.145 CC examples/thread/thread/thread_ex.o 00:02:53.145 CC app/spdk_dd/spdk_dd.o 00:02:53.145 LINK verify 00:02:53.145 CC app/fio/nvme/fio_plugin.o 00:02:53.145 LINK spdk_nvme_discover 00:02:53.145 CC examples/sock/hello_world/hello_sock.o 00:02:53.402 LINK thread 00:02:53.403 LINK test_dma 00:02:53.403 CC app/vhost/vhost.o 00:02:53.403 LINK hello_sock 00:02:53.661 LINK spdk_dd 00:02:53.661 CC examples/vmd/lsvmd/lsvmd.o 00:02:53.661 CC examples/vmd/led/led.o 00:02:53.661 LINK vhost 00:02:53.661 LINK spdk_nvme_perf 00:02:53.661 LINK spdk_nvme 00:02:53.661 LINK lsvmd 00:02:53.661 CC app/fio/bdev/fio_plugin.o 00:02:53.661 LINK led 00:02:53.917 LINK spdk_nvme_identify 00:02:53.917 CC test/app/bdev_svc/bdev_svc.o 00:02:53.917 CC test/app/histogram_perf/histogram_perf.o 00:02:53.917 CC test/app/jsoncat/jsoncat.o 00:02:53.917 LINK spdk_top 00:02:53.917 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:53.917 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:53.917 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:54.174 LINK jsoncat 00:02:54.174 LINK histogram_perf 00:02:54.174 LINK bdev_svc 00:02:54.174 CC examples/idxd/perf/perf.o 00:02:54.174 CC examples/accel/perf/accel_perf.o 00:02:54.174 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:54.174 LINK spdk_bdev 00:02:54.431 CC test/app/stub/stub.o 00:02:54.431 CC examples/blob/hello_world/hello_blob.o 00:02:54.431 LINK nvme_fuzz 00:02:54.431 TEST_HEADER include/spdk/accel.h 00:02:54.431 TEST_HEADER include/spdk/accel_module.h 00:02:54.431 TEST_HEADER include/spdk/assert.h 00:02:54.431 TEST_HEADER include/spdk/barrier.h 00:02:54.431 TEST_HEADER include/spdk/base64.h 00:02:54.431 TEST_HEADER include/spdk/bdev.h 00:02:54.431 TEST_HEADER include/spdk/bdev_module.h 00:02:54.431 TEST_HEADER include/spdk/bdev_zone.h 00:02:54.431 TEST_HEADER include/spdk/bit_array.h 00:02:54.431 TEST_HEADER include/spdk/bit_pool.h 00:02:54.431 TEST_HEADER include/spdk/blob_bdev.h 00:02:54.431 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:54.431 TEST_HEADER include/spdk/blobfs.h 00:02:54.431 TEST_HEADER include/spdk/blob.h 00:02:54.431 TEST_HEADER include/spdk/conf.h 00:02:54.431 CC test/blobfs/mkfs/mkfs.o 00:02:54.431 TEST_HEADER include/spdk/config.h 00:02:54.431 TEST_HEADER include/spdk/cpuset.h 00:02:54.431 TEST_HEADER include/spdk/crc16.h 00:02:54.431 TEST_HEADER include/spdk/crc32.h 00:02:54.431 TEST_HEADER include/spdk/crc64.h 00:02:54.431 TEST_HEADER include/spdk/dif.h 00:02:54.431 TEST_HEADER include/spdk/dma.h 00:02:54.431 TEST_HEADER include/spdk/endian.h 00:02:54.431 TEST_HEADER include/spdk/env_dpdk.h 00:02:54.431 TEST_HEADER include/spdk/env.h 00:02:54.431 TEST_HEADER include/spdk/event.h 00:02:54.431 TEST_HEADER include/spdk/fd_group.h 00:02:54.431 TEST_HEADER include/spdk/fd.h 00:02:54.431 TEST_HEADER include/spdk/file.h 00:02:54.431 TEST_HEADER include/spdk/ftl.h 00:02:54.431 TEST_HEADER 
include/spdk/gpt_spec.h 00:02:54.431 TEST_HEADER include/spdk/hexlify.h 00:02:54.431 TEST_HEADER include/spdk/histogram_data.h 00:02:54.431 TEST_HEADER include/spdk/idxd.h 00:02:54.431 TEST_HEADER include/spdk/idxd_spec.h 00:02:54.431 TEST_HEADER include/spdk/init.h 00:02:54.431 TEST_HEADER include/spdk/ioat.h 00:02:54.431 LINK stub 00:02:54.431 TEST_HEADER include/spdk/ioat_spec.h 00:02:54.431 TEST_HEADER include/spdk/iscsi_spec.h 00:02:54.431 LINK idxd_perf 00:02:54.431 TEST_HEADER include/spdk/json.h 00:02:54.431 TEST_HEADER include/spdk/jsonrpc.h 00:02:54.431 TEST_HEADER include/spdk/keyring.h 00:02:54.431 TEST_HEADER include/spdk/keyring_module.h 00:02:54.431 TEST_HEADER include/spdk/likely.h 00:02:54.431 TEST_HEADER include/spdk/log.h 00:02:54.431 TEST_HEADER include/spdk/lvol.h 00:02:54.431 TEST_HEADER include/spdk/memory.h 00:02:54.431 TEST_HEADER include/spdk/mmio.h 00:02:54.431 TEST_HEADER include/spdk/nbd.h 00:02:54.431 TEST_HEADER include/spdk/net.h 00:02:54.431 TEST_HEADER include/spdk/notify.h 00:02:54.431 TEST_HEADER include/spdk/nvme.h 00:02:54.431 TEST_HEADER include/spdk/nvme_intel.h 00:02:54.431 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:54.431 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:54.431 TEST_HEADER include/spdk/nvme_spec.h 00:02:54.431 TEST_HEADER include/spdk/nvme_zns.h 00:02:54.689 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:54.689 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:54.689 TEST_HEADER include/spdk/nvmf.h 00:02:54.689 TEST_HEADER include/spdk/nvmf_spec.h 00:02:54.689 TEST_HEADER include/spdk/nvmf_transport.h 00:02:54.689 TEST_HEADER include/spdk/opal.h 00:02:54.689 LINK hello_blob 00:02:54.689 TEST_HEADER include/spdk/opal_spec.h 00:02:54.689 TEST_HEADER include/spdk/pci_ids.h 00:02:54.689 TEST_HEADER include/spdk/pipe.h 00:02:54.689 TEST_HEADER include/spdk/queue.h 00:02:54.689 TEST_HEADER include/spdk/reduce.h 00:02:54.689 TEST_HEADER include/spdk/rpc.h 00:02:54.689 TEST_HEADER include/spdk/scheduler.h 00:02:54.689 TEST_HEADER include/spdk/scsi.h 00:02:54.689 TEST_HEADER include/spdk/scsi_spec.h 00:02:54.689 TEST_HEADER include/spdk/sock.h 00:02:54.689 TEST_HEADER include/spdk/stdinc.h 00:02:54.689 TEST_HEADER include/spdk/string.h 00:02:54.689 TEST_HEADER include/spdk/thread.h 00:02:54.689 TEST_HEADER include/spdk/trace.h 00:02:54.689 TEST_HEADER include/spdk/trace_parser.h 00:02:54.689 TEST_HEADER include/spdk/tree.h 00:02:54.689 TEST_HEADER include/spdk/ublk.h 00:02:54.689 TEST_HEADER include/spdk/util.h 00:02:54.689 TEST_HEADER include/spdk/uuid.h 00:02:54.689 TEST_HEADER include/spdk/version.h 00:02:54.689 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:54.689 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:54.689 LINK vhost_fuzz 00:02:54.689 TEST_HEADER include/spdk/vhost.h 00:02:54.689 TEST_HEADER include/spdk/vmd.h 00:02:54.689 TEST_HEADER include/spdk/xor.h 00:02:54.689 TEST_HEADER include/spdk/zipf.h 00:02:54.689 CXX test/cpp_headers/accel.o 00:02:54.689 LINK accel_perf 00:02:54.689 LINK mkfs 00:02:54.689 CC test/env/mem_callbacks/mem_callbacks.o 00:02:54.689 CC examples/nvme/hello_world/hello_world.o 00:02:54.689 CC examples/blob/cli/blobcli.o 00:02:54.689 CXX test/cpp_headers/accel_module.o 00:02:54.948 CXX test/cpp_headers/assert.o 00:02:54.948 CXX test/cpp_headers/barrier.o 00:02:54.948 CC test/event/event_perf/event_perf.o 00:02:54.948 CC test/event/reactor/reactor.o 00:02:54.948 CC test/event/reactor_perf/reactor_perf.o 00:02:54.948 LINK hello_world 00:02:54.948 LINK event_perf 00:02:54.948 CXX test/cpp_headers/base64.o 
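The CXX test/cpp_headers/*.o lines (together with the TEST_HEADER include/spdk/*.h listing just above) are SPDK's public-header check: each installed header is compiled on its own from a C++ translation unit, so a header that is not self-contained or not C++-safe breaks the build. A minimal sketch of the idea only; the file name and include path below are assumptions, not the test's actual sources.

    # Compile one public header in isolation as C++.
    printf '#include <spdk/accel.h>\n' > check_header.cpp   # hypothetical one-line translation unit
    g++ -std=c++11 -I /home/vagrant/spdk_repo/spdk/include -c check_header.cpp -o check_header.o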
00:02:54.948 LINK reactor 00:02:54.948 LINK reactor_perf 00:02:55.207 CC test/event/app_repeat/app_repeat.o 00:02:55.207 CC test/event/scheduler/scheduler.o 00:02:55.207 CXX test/cpp_headers/bdev.o 00:02:55.207 CXX test/cpp_headers/bdev_module.o 00:02:55.207 CC examples/nvme/reconnect/reconnect.o 00:02:55.207 LINK blobcli 00:02:55.207 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:55.207 LINK mem_callbacks 00:02:55.207 LINK app_repeat 00:02:55.465 LINK scheduler 00:02:55.465 CXX test/cpp_headers/bdev_zone.o 00:02:55.465 CC test/lvol/esnap/esnap.o 00:02:55.465 CC test/env/vtophys/vtophys.o 00:02:55.465 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:55.465 LINK reconnect 00:02:55.723 LINK iscsi_fuzz 00:02:55.723 CC test/nvme/aer/aer.o 00:02:55.723 CC test/env/memory/memory_ut.o 00:02:55.723 CXX test/cpp_headers/bit_array.o 00:02:55.723 CXX test/cpp_headers/bit_pool.o 00:02:55.723 LINK vtophys 00:02:55.723 CXX test/cpp_headers/blob_bdev.o 00:02:55.723 LINK env_dpdk_post_init 00:02:55.723 LINK nvme_manage 00:02:55.723 CXX test/cpp_headers/blobfs_bdev.o 00:02:55.981 CC test/rpc_client/rpc_client_test.o 00:02:55.981 LINK aer 00:02:55.981 CXX test/cpp_headers/blobfs.o 00:02:55.981 CXX test/cpp_headers/blob.o 00:02:55.981 CC test/env/pci/pci_ut.o 00:02:55.981 CC test/accel/dif/dif.o 00:02:55.981 CC examples/nvme/arbitration/arbitration.o 00:02:55.981 LINK rpc_client_test 00:02:55.981 CXX test/cpp_headers/conf.o 00:02:56.238 CC test/nvme/reset/reset.o 00:02:56.238 CXX test/cpp_headers/config.o 00:02:56.238 CC examples/bdev/hello_world/hello_bdev.o 00:02:56.238 CC examples/bdev/bdevperf/bdevperf.o 00:02:56.238 CXX test/cpp_headers/cpuset.o 00:02:56.238 CC test/nvme/sgl/sgl.o 00:02:56.238 LINK pci_ut 00:02:56.497 LINK arbitration 00:02:56.497 LINK reset 00:02:56.497 CXX test/cpp_headers/crc16.o 00:02:56.497 LINK dif 00:02:56.497 LINK hello_bdev 00:02:56.497 CXX test/cpp_headers/crc32.o 00:02:56.497 LINK sgl 00:02:56.497 CXX test/cpp_headers/crc64.o 00:02:56.755 CC examples/nvme/hotplug/hotplug.o 00:02:56.755 CC test/nvme/e2edp/nvme_dp.o 00:02:56.755 CXX test/cpp_headers/dif.o 00:02:56.755 LINK memory_ut 00:02:56.755 CXX test/cpp_headers/endian.o 00:02:56.755 CXX test/cpp_headers/dma.o 00:02:56.755 CXX test/cpp_headers/env_dpdk.o 00:02:57.014 CC test/nvme/overhead/overhead.o 00:02:57.014 LINK hotplug 00:02:57.014 LINK nvme_dp 00:02:57.014 CXX test/cpp_headers/env.o 00:02:57.014 CC test/bdev/bdevio/bdevio.o 00:02:57.014 CXX test/cpp_headers/event.o 00:02:57.014 CC test/nvme/err_injection/err_injection.o 00:02:57.014 CC test/nvme/startup/startup.o 00:02:57.014 LINK bdevperf 00:02:57.273 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.273 LINK overhead 00:02:57.273 CC test/nvme/reserve/reserve.o 00:02:57.273 CXX test/cpp_headers/fd_group.o 00:02:57.273 CC test/nvme/simple_copy/simple_copy.o 00:02:57.273 LINK err_injection 00:02:57.273 LINK startup 00:02:57.273 CXX test/cpp_headers/fd.o 00:02:57.273 LINK cmb_copy 00:02:57.273 LINK bdevio 00:02:57.532 LINK reserve 00:02:57.532 CC test/nvme/connect_stress/connect_stress.o 00:02:57.532 LINK simple_copy 00:02:57.532 CC test/nvme/boot_partition/boot_partition.o 00:02:57.532 CXX test/cpp_headers/file.o 00:02:57.532 CC test/nvme/compliance/nvme_compliance.o 00:02:57.532 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.532 CC examples/nvme/abort/abort.o 00:02:57.791 LINK connect_stress 00:02:57.791 LINK boot_partition 00:02:57.791 CXX test/cpp_headers/ftl.o 00:02:57.791 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:57.791 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:57.791 CC test/nvme/fdp/fdp.o 00:02:57.791 LINK fused_ordering 00:02:57.791 LINK nvme_compliance 00:02:57.791 CXX test/cpp_headers/gpt_spec.o 00:02:57.791 LINK pmr_persistence 00:02:57.791 CXX test/cpp_headers/hexlify.o 00:02:58.050 LINK doorbell_aers 00:02:58.050 CC test/nvme/cuse/cuse.o 00:02:58.050 CXX test/cpp_headers/histogram_data.o 00:02:58.050 CXX test/cpp_headers/idxd.o 00:02:58.050 CXX test/cpp_headers/idxd_spec.o 00:02:58.050 CXX test/cpp_headers/init.o 00:02:58.050 CXX test/cpp_headers/ioat.o 00:02:58.050 CXX test/cpp_headers/ioat_spec.o 00:02:58.050 LINK abort 00:02:58.050 LINK fdp 00:02:58.050 CXX test/cpp_headers/iscsi_spec.o 00:02:58.309 CXX test/cpp_headers/json.o 00:02:58.309 CXX test/cpp_headers/jsonrpc.o 00:02:58.309 CXX test/cpp_headers/keyring.o 00:02:58.309 CXX test/cpp_headers/keyring_module.o 00:02:58.309 CXX test/cpp_headers/likely.o 00:02:58.309 CXX test/cpp_headers/log.o 00:02:58.309 CXX test/cpp_headers/lvol.o 00:02:58.309 CXX test/cpp_headers/memory.o 00:02:58.309 CXX test/cpp_headers/mmio.o 00:02:58.309 CXX test/cpp_headers/nbd.o 00:02:58.569 CXX test/cpp_headers/net.o 00:02:58.569 CXX test/cpp_headers/notify.o 00:02:58.569 CXX test/cpp_headers/nvme.o 00:02:58.569 CXX test/cpp_headers/nvme_intel.o 00:02:58.569 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.569 CC examples/nvmf/nvmf/nvmf.o 00:02:58.569 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:58.569 CXX test/cpp_headers/nvme_spec.o 00:02:58.569 CXX test/cpp_headers/nvme_zns.o 00:02:58.569 CXX test/cpp_headers/nvmf_cmd.o 00:02:58.569 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:58.569 CXX test/cpp_headers/nvmf.o 00:02:58.569 CXX test/cpp_headers/nvmf_spec.o 00:02:58.827 CXX test/cpp_headers/nvmf_transport.o 00:02:58.827 CXX test/cpp_headers/opal.o 00:02:58.827 CXX test/cpp_headers/opal_spec.o 00:02:58.827 CXX test/cpp_headers/pci_ids.o 00:02:58.827 CXX test/cpp_headers/pipe.o 00:02:58.827 LINK nvmf 00:02:58.827 CXX test/cpp_headers/queue.o 00:02:58.827 CXX test/cpp_headers/reduce.o 00:02:58.827 CXX test/cpp_headers/rpc.o 00:02:59.086 CXX test/cpp_headers/scheduler.o 00:02:59.086 CXX test/cpp_headers/scsi.o 00:02:59.086 CXX test/cpp_headers/scsi_spec.o 00:02:59.086 CXX test/cpp_headers/sock.o 00:02:59.086 CXX test/cpp_headers/stdinc.o 00:02:59.086 CXX test/cpp_headers/string.o 00:02:59.086 CXX test/cpp_headers/thread.o 00:02:59.086 CXX test/cpp_headers/trace.o 00:02:59.086 CXX test/cpp_headers/trace_parser.o 00:02:59.086 CXX test/cpp_headers/tree.o 00:02:59.086 CXX test/cpp_headers/ublk.o 00:02:59.086 CXX test/cpp_headers/util.o 00:02:59.086 CXX test/cpp_headers/uuid.o 00:02:59.086 CXX test/cpp_headers/version.o 00:02:59.385 CXX test/cpp_headers/vfio_user_pci.o 00:02:59.385 CXX test/cpp_headers/vfio_user_spec.o 00:02:59.385 LINK cuse 00:02:59.385 CXX test/cpp_headers/vhost.o 00:02:59.385 CXX test/cpp_headers/vmd.o 00:02:59.385 CXX test/cpp_headers/xor.o 00:02:59.385 CXX test/cpp_headers/zipf.o 00:03:00.852 LINK esnap 00:03:01.115 00:03:01.115 real 1m3.717s 00:03:01.115 user 6m31.334s 00:03:01.115 sys 1m39.124s 00:03:01.115 10:41:30 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:01.115 ************************************ 00:03:01.115 END TEST make 00:03:01.115 ************************************ 00:03:01.115 10:41:30 make -- common/autotest_common.sh@10 -- $ set +x 00:03:01.115 10:41:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:01.115 10:41:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:01.115 10:41:30 -- pm/common@40 -- $ 
local monitor pid pids signal=TERM 00:03:01.115 10:41:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.115 10:41:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:01.115 10:41:30 -- pm/common@44 -- $ pid=5148 00:03:01.115 10:41:30 -- pm/common@50 -- $ kill -TERM 5148 00:03:01.115 10:41:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.115 10:41:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:01.115 10:41:30 -- pm/common@44 -- $ pid=5149 00:03:01.115 10:41:30 -- pm/common@50 -- $ kill -TERM 5149 00:03:01.115 10:41:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:01.115 10:41:30 -- nvmf/common.sh@7 -- # uname -s 00:03:01.115 10:41:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:01.115 10:41:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:01.115 10:41:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:01.115 10:41:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:01.115 10:41:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:01.115 10:41:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:01.115 10:41:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:01.115 10:41:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:01.115 10:41:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:01.115 10:41:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:01.115 10:41:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:03:01.115 10:41:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:03:01.115 10:41:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:01.115 10:41:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:01.115 10:41:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:01.115 10:41:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:01.115 10:41:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:01.115 10:41:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:01.115 10:41:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:01.115 10:41:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:01.115 10:41:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.115 10:41:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.116 10:41:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.116 10:41:30 -- paths/export.sh@5 -- # export PATH 00:03:01.116 10:41:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.116 10:41:30 -- nvmf/common.sh@47 -- # : 0 00:03:01.116 10:41:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:01.116 10:41:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:01.116 10:41:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:01.116 10:41:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:01.116 10:41:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:01.116 10:41:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:01.116 10:41:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:01.116 10:41:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:01.116 10:41:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:01.116 10:41:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:01.116 10:41:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:01.116 10:41:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:01.116 10:41:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:01.116 10:41:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:01.116 10:41:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:01.116 10:41:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:01.116 10:41:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:01.116 10:41:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:01.116 10:41:30 -- spdk/autotest.sh@48 -- # udevadm_pid=52795 00:03:01.116 10:41:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:01.116 10:41:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:01.116 10:41:30 -- pm/common@17 -- # local monitor 00:03:01.116 10:41:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.116 10:41:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.116 10:41:30 -- pm/common@21 -- # date +%s 00:03:01.116 10:41:30 -- pm/common@21 -- # date +%s 00:03:01.116 10:41:30 -- pm/common@25 -- # sleep 1 00:03:01.116 10:41:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721904090 00:03:01.116 10:41:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721904090 00:03:01.116 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721904090_collect-vmstat.pm.log 00:03:01.116 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721904090_collect-cpu-load.pm.log 00:03:02.491 10:41:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:02.491 10:41:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:02.491 10:41:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:02.491 10:41:31 -- common/autotest_common.sh@10 -- # set +x 00:03:02.491 10:41:31 -- spdk/autotest.sh@59 -- # create_test_list 00:03:02.491 10:41:31 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:02.491 10:41:31 -- common/autotest_common.sh@10 -- # set +x 00:03:02.491 10:41:31 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:02.491 10:41:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:02.491 10:41:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:02.491 10:41:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:02.491 10:41:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:02.491 10:41:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:02.491 10:41:31 -- common/autotest_common.sh@1455 -- # uname 00:03:02.491 10:41:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:02.491 10:41:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:02.491 10:41:31 -- common/autotest_common.sh@1475 -- # uname 00:03:02.491 10:41:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:02.491 10:41:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:02.491 10:41:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:02.491 10:41:31 -- spdk/autotest.sh@72 -- # hash lcov 00:03:02.491 10:41:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:02.491 10:41:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:02.491 --rc lcov_branch_coverage=1 00:03:02.491 --rc lcov_function_coverage=1 00:03:02.491 --rc genhtml_branch_coverage=1 00:03:02.491 --rc genhtml_function_coverage=1 00:03:02.491 --rc genhtml_legend=1 00:03:02.491 --rc geninfo_all_blocks=1 00:03:02.491 ' 00:03:02.491 10:41:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:02.491 --rc lcov_branch_coverage=1 00:03:02.491 --rc lcov_function_coverage=1 00:03:02.491 --rc genhtml_branch_coverage=1 00:03:02.491 --rc genhtml_function_coverage=1 00:03:02.491 --rc genhtml_legend=1 00:03:02.491 --rc geninfo_all_blocks=1 00:03:02.491 ' 00:03:02.491 10:41:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:02.491 --rc lcov_branch_coverage=1 00:03:02.491 --rc lcov_function_coverage=1 00:03:02.491 --rc genhtml_branch_coverage=1 00:03:02.491 --rc genhtml_function_coverage=1 00:03:02.491 --rc genhtml_legend=1 00:03:02.491 --rc geninfo_all_blocks=1 00:03:02.491 --no-external' 00:03:02.491 10:41:31 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:02.491 --rc lcov_branch_coverage=1 00:03:02.491 --rc lcov_function_coverage=1 00:03:02.491 --rc genhtml_branch_coverage=1 00:03:02.491 --rc genhtml_function_coverage=1 00:03:02.491 --rc genhtml_legend=1 00:03:02.491 --rc geninfo_all_blocks=1 00:03:02.491 --no-external' 00:03:02.491 10:41:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:02.491 lcov: LCOV version 1.14 00:03:02.491 10:41:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:17.375 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:17.375 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
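The trace above enables branch and function coverage through LCOV_OPTS and then captures a zero-count baseline (-c -i -t Baseline) over the freshly built tree; the "no functions found" warnings that follow are expected for the test/cpp_headers/*.gcno objects, since each of those translation units only includes a single public header and defines no functions of its own. A minimal sketch of the same baseline step, assuming the repository and output paths used in this run:

    # Capture an initial (zero-count) coverage baseline before any tests run.
    # Flags and paths are the ones shown in this run's trace (lcov 1.14 here).
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
    src=/home/vagrant/spdk_repo/spdk
    out=$src/../output
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"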
00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:32.242 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:32.242 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:32.242 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:32.242 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:32.243 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:32.243 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:32.243 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:34.141 10:42:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:34.141 10:42:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:34.141 10:42:03 -- common/autotest_common.sh@10 -- # set +x 00:03:34.141 10:42:03 -- spdk/autotest.sh@91 -- # rm -f 00:03:34.141 10:42:03 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:34.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.965 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:34.965 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:34.965 10:42:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:34.965 10:42:04 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:34.965 10:42:04 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:34.965 10:42:04 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:34.965 10:42:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.965 10:42:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:34.965 10:42:04 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:34.965 10:42:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.965 10:42:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.965 10:42:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.965 10:42:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:34.965 10:42:04 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:34.965 10:42:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:34.965 10:42:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.965 10:42:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.965 10:42:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:34.965 10:42:04 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:34.965 10:42:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:34.965 10:42:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.965 10:42:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.965 10:42:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 
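The pre-cleanup trace above is the zoned-namespace scan: get_zoned_devs walks /sys/block/nvme* and counts a namespace as zoned only when its queue/zoned attribute reports something other than "none"; all four namespaces on this host report "none", so the map stays empty ((( 0 > 0 )) below). A standalone sketch of that check, using the same sysfs attribute (the helper in this run also declares a bdf variable for the PCI address, while this sketch just flags the device):

    # Collect zoned NVMe namespaces the way the trace above does; a device is
    # zoned when /sys/block/<dev>/queue/zoned is anything other than "none".
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(< "$nvme/queue/zoned") == none ]] || zoned_devs[$dev]=1
    done
    echo "found ${#zoned_devs[@]} zoned namespace(s): ${!zoned_devs[*]}"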
00:03:34.966 10:42:04 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:34.966 10:42:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:34.966 10:42:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.966 10:42:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:34.966 10:42:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:34.966 10:42:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:34.966 10:42:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:34.966 10:42:04 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:34.966 10:42:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:34.966 No valid GPT data, bailing 00:03:34.966 10:42:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:34.966 10:42:04 -- scripts/common.sh@391 -- # pt= 00:03:34.966 10:42:04 -- scripts/common.sh@392 -- # return 1 00:03:34.966 10:42:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:34.966 1+0 records in 00:03:34.966 1+0 records out 00:03:34.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485456 s, 216 MB/s 00:03:34.966 10:42:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:34.966 10:42:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:34.966 10:42:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:34.966 10:42:04 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:34.966 10:42:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:34.966 No valid GPT data, bailing 00:03:34.966 10:42:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:34.966 10:42:04 -- scripts/common.sh@391 -- # pt= 00:03:34.966 10:42:04 -- scripts/common.sh@392 -- # return 1 00:03:34.966 10:42:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:34.966 1+0 records in 00:03:34.966 1+0 records out 00:03:34.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480733 s, 218 MB/s 00:03:34.966 10:42:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:34.966 10:42:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:34.966 10:42:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:34.966 10:42:04 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:34.966 10:42:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:35.224 No valid GPT data, bailing 00:03:35.224 10:42:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:35.224 10:42:04 -- scripts/common.sh@391 -- # pt= 00:03:35.224 10:42:04 -- scripts/common.sh@392 -- # return 1 00:03:35.224 10:42:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:35.224 1+0 records in 00:03:35.224 1+0 records out 00:03:35.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477811 s, 219 MB/s 00:03:35.224 10:42:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.224 10:42:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:35.224 10:42:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:35.224 10:42:04 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:35.224 10:42:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:35.224 No valid GPT data, bailing 00:03:35.224 10:42:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:35.224 
10:42:04 -- scripts/common.sh@391 -- # pt= 00:03:35.224 10:42:04 -- scripts/common.sh@392 -- # return 1 00:03:35.224 10:42:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:35.224 1+0 records in 00:03:35.224 1+0 records out 00:03:35.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483563 s, 217 MB/s 00:03:35.224 10:42:04 -- spdk/autotest.sh@118 -- # sync 00:03:35.224 10:42:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:35.224 10:42:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:35.224 10:42:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:37.125 10:42:06 -- spdk/autotest.sh@124 -- # uname -s 00:03:37.125 10:42:06 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:37.125 10:42:06 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:37.125 10:42:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.125 10:42:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.125 10:42:06 -- common/autotest_common.sh@10 -- # set +x 00:03:37.125 ************************************ 00:03:37.125 START TEST setup.sh 00:03:37.125 ************************************ 00:03:37.125 10:42:06 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:37.125 * Looking for test storage... 00:03:37.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:37.125 10:42:06 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:37.125 10:42:06 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:37.125 10:42:06 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:37.125 10:42:06 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.125 10:42:06 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.125 10:42:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.125 ************************************ 00:03:37.125 START TEST acl 00:03:37.125 ************************************ 00:03:37.125 10:42:06 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:37.388 * Looking for test storage... 
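Before the setup.sh suite above started, autotest probed each whole namespace for a partition table (scripts/spdk-gpt.py plus blkid -s PTTYPE, each reporting "No valid GPT data, bailing") and, finding none, zeroed its first MiB. A simplified sketch of that wipe, probing with blkid only (the run also consults spdk-gpt.py) and using the same extglob pattern for whole namespaces:

    # Zero the first MiB of every whole NVMe namespace that carries no
    # partition table, mirroring the block_in_use/dd fallback traced above.
    # WARNING: destructive; run only against scratch devices.
    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        pt=$(blkid -s PTTYPE -o value "$dev") || true   # empty when no GPT/MBR
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done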
00:03:37.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:37.388 10:42:06 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:37.388 10:42:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.388 10:42:06 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:37.388 10:42:06 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:37.388 10:42:06 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:37.388 10:42:06 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:37.388 10:42:06 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:37.388 10:42:06 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.388 10:42:06 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:37.954 10:42:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:37.954 10:42:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:37.954 10:42:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.954 10:42:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:37.954 10:42:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.954 10:42:07 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:38.520 10:42:08 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:38.520 10:42:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:38.520 10:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.520 Hugepages 00:03:38.520 node hugesize free / total 00:03:38.520 10:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:38.520 10:42:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:38.520 10:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.520 00:03:38.520 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.520 10:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:38.520 10:42:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:38.520 10:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:38.778 10:42:08 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:38.778 10:42:08 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.778 10:42:08 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.778 10:42:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:38.778 ************************************ 00:03:38.778 START TEST denied 00:03:38.778 ************************************ 00:03:38.778 10:42:08 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:38.778 10:42:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:38.778 10:42:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:38.778 10:42:08 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:38.778 10:42:08 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.778 10:42:08 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:39.713 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:39.714 10:42:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:39.714 10:42:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:39.714 10:42:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:39.714 10:42:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:39.714 10:42:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:39.714 10:42:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:39.714 10:42:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:39.714 10:42:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:39.714 10:42:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.714 10:42:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.282 00:03:40.282 real 0m1.382s 00:03:40.282 user 0m0.572s 00:03:40.282 sys 0m0.742s 00:03:40.282 10:42:09 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.282 10:42:09 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:40.282 ************************************ 00:03:40.282 END TEST denied 00:03:40.282 ************************************ 00:03:40.282 10:42:09 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:40.282 10:42:09 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.282 10:42:09 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.282 10:42:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.282 ************************************ 00:03:40.282 START TEST allowed 00:03:40.282 ************************************ 00:03:40.282 10:42:09 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:40.282 10:42:09 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:40.282 10:42:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:40.282 10:42:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:40.282 10:42:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.282 10:42:09 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:41.216 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:41.216 10:42:10 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:41.216 10:42:10 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:41.216 10:42:10 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:41.216 10:42:10 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:41.216 10:42:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:41.216 10:42:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:41.216 10:42:10 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:41.216 10:42:10 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:41.216 10:42:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.216 10:42:10 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.847 00:03:41.847 real 0m1.549s 00:03:41.847 user 0m0.656s 00:03:41.847 sys 0m0.858s 00:03:41.847 10:42:11 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.847 10:42:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:03:41.847 ************************************ 00:03:41.847 END TEST allowed 00:03:41.847 ************************************ 00:03:41.847 ************************************ 00:03:41.847 END TEST acl 00:03:41.847 ************************************ 00:03:41.847 00:03:41.847 real 0m4.663s 00:03:41.847 user 0m1.992s 00:03:41.847 sys 0m2.571s 00:03:41.847 10:42:11 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.847 10:42:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:41.847 10:42:11 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:41.847 10:42:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.847 10:42:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.847 10:42:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.847 ************************************ 00:03:41.847 START TEST hugepages 00:03:41.847 ************************************ 00:03:41.847 10:42:11 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:42.107 * Looking for test storage... 00:03:42.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5996152 kB' 'MemAvailable: 7393924 kB' 'Buffers: 2436 kB' 'Cached: 1612292 kB' 'SwapCached: 0 kB' 'Active: 435452 kB' 'Inactive: 1283396 kB' 'Active(anon): 114608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 105800 kB' 'Mapped: 48664 kB' 'Shmem: 10488 kB' 'KReclaimable: 61392 kB' 'Slab: 133040 kB' 'SReclaimable: 61392 kB' 'SUnreclaim: 71648 kB' 'KernelStack: 6444 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 333264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.107 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
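The long run of '[[ <field> == Hugepagesize ]] / continue' comparisons above and below is get_meminfo walking every field of the captured /proc/meminfo snapshot until it reaches Hugepagesize; the snapshot shows a 2048 kB default hugepage size with 2048 pages in the pool. A compact way to pull the same two values, assuming only the default hugepage size is of interest:

    # Read the default hugepage size and current pool size from /proc/meminfo,
    # the same source the get_meminfo helper above is iterating over.
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    hugepages_total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    echo "default hugepage size: ${hugepagesize_kb} kB, pool: ${hugepages_total} pages"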
00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 
10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.108 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.109 
10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
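The trace above is setup/common.sh's get_meminfo walking /proc/meminfo entry by entry (IFS=': '; read -r var val _) until it reaches Hugepagesize and echoes 2048; hugepages.sh then records that as default_hugepages, unsets HUGE_EVEN_ALLOC/HUGEMEM/HUGENODE/NRHUGE, counts the NUMA nodes (no_nodes=1), and has clear_hp write 0 into every per-node hugepage count before exporting CLEAR_HUGE=yes. A minimal sketch of that lookup pattern, written only from what the xtrace shows (the helper name and the clear-out loop below are illustrative stand-ins, not the SPDK source):

    get_meminfo_sketch() {                     # illustrative stand-in for the get_meminfo pattern in the trace
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # same "Key:   value kB" split seen above
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo_sketch Hugepagesize)   # -> 2048 on this runner

    # clear_hp in the log does the equivalent of zeroing each node's hugepage pools
    # (assumption: run as root, as the CI job is):
    for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done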
00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:42.109 10:42:11 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:42.109 10:42:11 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.109 10:42:11 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.109 10:42:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.109 ************************************ 00:03:42.109 START TEST default_setup 00:03:42.109 ************************************ 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.109 10:42:11 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:42.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.938 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.938 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8079312 kB' 'MemAvailable: 9477032 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452304 kB' 'Inactive: 1283400 kB' 'Active(anon): 131460 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283400 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122356 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 132924 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 71644 kB' 'KernelStack: 6420 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
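At this point the default_setup test has asked get_test_nr_hugepages for 2097152 kB on node 0, scripts/setup.sh has rebound the two QEMU NVMe controllers (0000:00:10.0 and 0000:00:11.0) to uio_pci_generic while skipping 0000:00:03.0 because its partitions are mounted, and verify_nr_hugepages begins pulling counters back out of the /proc/meminfo snapshot printed above. The per-node page count follows directly from the sizes in the log; a sketch of that arithmetic, assuming the request and Hugepagesize are both in kB as the snapshot suggests:

    size_kb=2097152          # requested by get_test_nr_hugepages in the trace
    hugepagesize_kb=2048     # Hugepagesize resolved earlier from /proc/meminfo
    nr_hugepages=$((size_kb / hugepagesize_kb))
    echo "$nr_hugepages"     # -> 1024, matching nr_hugepages=1024 and nodes_test[0]=1024 above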
00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.939 
10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.939 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
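verify_nr_hugepages repeats this key-by-key scan three times over the same snapshot: AnonHugePages here, then HugePages_Surp and HugePages_Rsvd further down, each resolving to 0 on this runner. For reference, the same three counters can be read in a single pass; this is only an illustration of what the checks consume, not the SPDK helper, which reuses get_meminfo per key:

    read -r anon surp rsvd < <(awk '
        BEGIN { a = s = r = 0 }
        /^AnonHugePages:/  { a = $2 }
        /^HugePages_Surp:/ { s = $2 }
        /^HugePages_Rsvd:/ { r = $2 }
        END { print a, s, r }
    ' /proc/meminfo)
    echo "anon=$anon surp=$surp rsvd=$rsvd"    # all 0 in the snapshot above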
00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.940 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 8079832 kB' 'MemAvailable: 9477452 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452156 kB' 'Inactive: 1283400 kB' 'Active(anon): 131312 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283400 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122388 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132712 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71636 kB' 'KernelStack: 6432 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.941 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.942 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8079968 kB' 'MemAvailable: 9477592 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452072 kB' 'Inactive: 1283404 kB' 'Active(anon): 131228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283404 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122388 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132712 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71636 kB' 'KernelStack: 6432 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 
10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.943 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 
10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.944 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.945 
10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.945 nr_hugepages=1024 00:03:42.945 resv_hugepages=0 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.945 surplus_hugepages=0 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.945 anon_hugepages=0 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8079968 kB' 'MemAvailable: 9477592 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452140 kB' 'Inactive: 1283404 kB' 'Active(anon): 131296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283404 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122408 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132708 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71632 kB' 'KernelStack: 6416 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 
10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.945 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
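The xtrace around this point is setup/common.sh's get_meminfo() scanning /proc/meminfo one field at a time: it mapfiles the file, strips any "Node <N> " prefix, then reads each "Key: value" pair with IFS=': ' and keeps continuing until the requested key (HugePages_Surp, then HugePages_Rsvd, then HugePages_Total, and finally the per-node HugePages_Surp for node0) matches, at which point it echoes the value. Below is a minimal standalone sketch of that pattern; the function name and layout are approximations for illustration rather than the exact upstream helper, and the values in the comments are simply the ones this particular run reports (1024 hugepages, zero reserved/surplus).

shopt -s extglob   # required for the +([0-9]) pattern used below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem var val _

    # Per-node queries (e.g. HugePages_Surp for node 0) read the per-NUMA-node
    # copy instead; with an empty $node the path does not exist and the helper
    # falls back to the system-wide /proc/meminfo, exactly as in the trace.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node <N> " prefix; strip it so both
    # files parse identically.
    mem=("${mem[@]#Node +([0-9]) }")

    # Each [[ ... ]] / continue pair in the xtrace above is one iteration of
    # this loop; the first matching key wins and its value is printed.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Values reported by this run:
nr_hugepages=$(get_meminfo_sketch HugePages_Total)     # 1024
surp=$(get_meminfo_sketch HugePages_Surp)              # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)              # 0
node0_surp=$(get_meminfo_sketch HugePages_Surp 0)      # 0
echo "HugePages_Total=$nr_hugepages surp=$surp resv=$resv node0_surp=$node0_surp"

Scanning field by field keeps the helper free of external tools such as awk or grep, which is presumably why the trace is this verbose: under xtrace every meminfo key produces its own [[ ... ]] / continue pair.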
00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.946 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8080392 kB' 'MemUsed: 4161580 kB' 'SwapCached: 0 kB' 'Active: 452052 kB' 'Inactive: 1283404 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283404 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1614720 kB' 'Mapped: 48612 kB' 'AnonPages: 122352 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61076 kB' 'Slab: 132704 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 
10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.947 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.948 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.948 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.948 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.948 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.948 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.948 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.207 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.208 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1 00:03:43.208 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.208 node0=1024 expecting 1024 00:03:43.208 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.208 10:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.208 00:03:43.208 real 0m0.968s 00:03:43.208 user 0m0.480s 00:03:43.208 sys 0m0.450s 00:03:43.208 10:42:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.208 10:42:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:43.208 ************************************ 00:03:43.208 END TEST default_setup 00:03:43.208 ************************************ 00:03:43.208 10:42:12 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:43.208 10:42:12 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.208 10:42:12 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.208 10:42:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.208 ************************************ 00:03:43.208 START TEST per_node_1G_alloc 00:03:43.208 ************************************ 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.208 10:42:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.471 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.471 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9133676 kB' 'MemAvailable: 10531308 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452492 kB' 'Inactive: 1283412 kB' 'Active(anon): 131648 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 
'Inactive(file): 1283412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122748 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132752 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71676 kB' 'KernelStack: 6456 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.471 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
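The meminfo snapshot printed above shows HugePages_Total: 512, Hugepagesize: 2048 kB and Hugetlb: 1048576 kB, i.e. 512 * 2048 kB = 1048576 kB, the 1 GiB per node that get_test_nr_hugepages 1048576 0 translated into nr_hugepages=512 at the start of this test. The pages themselves were reserved a few entries earlier, when the test ran scripts/setup.sh with NRHUGE=512 and HUGENODE=0. A minimal sketch of that step (repo path copied from the trace, everything else simplified and assumed, not the exact hugepages.sh code):

  # Reserve 512 x 2 MiB hugepages on NUMA node 0, as the per_node_1G_alloc stage does.
  SPDK_REPO=/home/vagrant/spdk_repo/spdk        # path as it appears in the trace
  NRHUGE=512 HUGENODE=0 "$SPDK_REPO/scripts/setup.sh"

  # Sanity check on the sizing: 512 pages * 2048 kB/page = 1048576 kB (1 GiB).
  pages=512 page_kb=2048
  (( pages * page_kb == 1048576 )) && echo "requested hugetlb: $((pages * page_kb)) kB"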
00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
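The field-by-field scan that fills this part of the trace is setup/common.sh's get_meminfo walking the snapshot it just printed: every key that is not the one requested (AnonHugePages here) hits the continue branch at common.sh@32, and the matching key falls through to the echo/return at common.sh@33. A stripped-down sketch of that pattern, reconstructed from the trace rather than copied from the script (the real helper also handles per-node /sys/devices/system/node/node<N>/meminfo files and strips their "Node <N> " prefix):

  # get_meminfo KEY  -- print the numeric value of KEY from /proc/meminfo.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
          echo "$val"                        # e.g. 0 for AnonHugePages, 512 for HugePages_Total
          return 0
      done < /proc/meminfo
      return 1                               # key not present (assumed fallback)
  }

  get_meminfo HugePages_Surp    # -> 0 in the runs above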
00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.472 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9133676 kB' 'MemAvailable: 10531308 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452484 kB' 'Inactive: 1283412 kB' 'Active(anon): 131640 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122816 kB' 'Mapped: 49000 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132744 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71668 kB' 'KernelStack: 6472 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.473 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.474 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
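At this point the trace is inside verify_nr_hugepages (setup/hugepages.sh): anon=0 was recorded from AnonHugePages at hugepages.sh@97, the snapshot is being re-scanned for HugePages_Surp (surp=0 just below, hugepages.sh@99), HugePages_Rsvd is read right after, and the per-node totals are then compared against the expected count, which is the same check that printed "node0=1024 expecting 1024" for default_setup earlier in this log. A condensed sketch of those checks, inferred from the trace rather than taken verbatim from hugepages.sh and reusing the get_meminfo sketch above:

  expected=512                              # per_node_1G_alloc asks for 512 pages on node 0
  anon=$(get_meminfo AnonHugePages)         # 0 in the trace above
  surp=$(get_meminfo HugePages_Surp)        # 0: no surplus pages
  resv=$(get_meminfo HugePages_Rsvd)        # 0 in the snapshot; read next in the trace
  total=$(get_meminfo HugePages_Total)      # 512
  echo "node0=$total expecting $expected"   # the real script loops over all NUMA nodes
  [[ $total == "$expected" ]]               # mirrors the [[ 1024 == 1024 ]] check at hugepages.sh@130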
00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9133676 kB' 'MemAvailable: 10531308 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452020 kB' 'Inactive: 1283412 kB' 'Active(anon): 131176 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122352 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132760 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71684 kB' 'KernelStack: 6432 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.475 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.476 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
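The long run of [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue entries through here is setup/common.sh's get_meminfo helper scanning every field of its input until it reaches the requested one (HugePages_Rsvd in this pass). Before that scan starts, the helper picks its input file and strips the per-node prefix; the following is a minimal stand-alone sketch of that step, reconstructed from the trace rather than copied from setup/common.sh:

#!/usr/bin/env bash
# Sketch only: how the traced code decides between /proc/meminfo and a per-node file.
shopt -s extglob                              # needed for the +([0-9]) pattern below
node=""                                       # empty => whole-system view; "0" => NUMA node 0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")              # per-node lines start with "Node N "; drop it
printf '%s\n' "${mem[@]}" | head -n 3         # these lines are what the scan below walks over

With node left empty, the node/node/meminfo path seen in the trace does not exist, so the whole-system /proc/meminfo is used; the node0 pass later in this test takes the /sys/devices/system/node/node0/meminfo branch instead.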
00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
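Each [[ <field> == ... ]] entry followed by continue is one iteration of that scan; xtrace prints the quoted right-hand side with every character backslash-escaped, which is why the pattern appears as \H\u\g\e\P\a\g\e\s\_\R\s\v\d. A self-contained sketch of the loop itself (hypothetical function name; the behaviour mirrors the traced IFS=': ' / read -r var val _ / continue sequence, it is not the SPDK source):

#!/usr/bin/env bash
# scan_meminfo FIELD [FILE]: print FIELD's numeric value the way the traced loop does.
# Hypothetical helper reconstructed from the xtrace above, not setup/common.sh itself.
scan_meminfo() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue      # mismatched fields are skipped, as in the trace
        echo "$val"                           # numeric value; a trailing "kB" ends up in _
        return 0
    done < "$file"
    return 1                                  # field not present
}

scan_meminfo HugePages_Rsvd                   # printed 0 on this test VM
scan_meminfo HugePages_Total                  # printed 512 on this run (512 x 2 MiB pages = 1 GiB)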
00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 
10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.477 nr_hugepages=512 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:43.477 resv_hugepages=0 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.477 surplus_hugepages=0 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.477 anon_hugepages=0 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.477 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9133428 kB' 'MemAvailable: 10531060 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452336 kB' 'Inactive: 1283412 kB' 'Active(anon): 131492 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122668 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132736 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71660 kB' 'KernelStack: 6416 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
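The pass running through here reads HugePages_Total the same way, and together with the surp=0 / resv=0 values obtained just above it feeds the verify_nr_hugepages arithmetic. Reduced to the values visible in this trace, the checks amount to the sketch below (hard-coded values from this run, not the full setup/hugepages.sh logic; the initial nodes_test[0]=512 is assumed from the requested per-node count):

#!/usr/bin/env bash
# Reduced sketch of the accounting checks traced in this test, values from this run.
nr_hugepages=512          # requested: 1 GiB of 2 MiB pages on the single node
surp=0                    # get_meminfo HugePages_Surp
resv=0                    # get_meminfo HugePages_Rsvd
total=512                 # get_meminfo HugePages_Total

(( total == nr_hugepages + surp + resv ))   # hugepages.sh@107: global accounting balances
(( total == nr_hugepages ))                 # hugepages.sh@109: nothing surplus or reserved

# Per-node view: this VM has one NUMA node, so all 512 pages must land on node0.
nodes_test=([0]=512)                        # assumed starting value for the sketch
(( nodes_test[0] += resv ))                 # hugepages.sh@116
node0_surp=0                                # get_meminfo HugePages_Surp 0 reads node0's file
(( nodes_test[0] += node0_surp ))           # hugepages.sh@117
echo "node0=${nodes_test[0]} expecting 512" # matches the "node0=512 expecting 512" line below
[[ ${nodes_test[0]} == 512 ]]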
00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.478 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.479 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.741 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.741 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.741 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@33 -- # echo 512 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.742 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9133428 kB' 'MemUsed: 3108544 kB' 'SwapCached: 0 kB' 'Active: 452056 kB' 'Inactive: 1283412 kB' 'Active(anon): 131212 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1614720 kB' 'Mapped: 48612 kB' 'AnonPages: 122416 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61076 kB' 'Slab: 132720 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.743 10:42:13 setup.sh.hugepages.per_node_1G_alloc 
[... xtrace elided: the read loop at setup/common.sh@31-32 tests the remaining per-node meminfo keys (Shmem through HugePages_Free) against HugePages_Surp and hits 'continue' on each ...]
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:43.744 node0=512 expecting 512
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:43.744
00:03:43.744 real	0m0.516s
00:03:43.744 user	0m0.255s
00:03:43.744 sys	0m0.292s
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:43.744 10:42:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:43.744 ************************************
00:03:43.744 END TEST per_node_1G_alloc
00:03:43.744 ************************************
00:03:43.744 10:42:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:43.744 10:42:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:43.744 10:42:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:43.744 10:42:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:43.744 ************************************
00:03:43.744 START TEST even_2G_alloc
00:03:43.744 ************************************
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
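The get_test_nr_hugepages call being traced here turns the 2 GiB (2097152 kB) request into a page count; the nr_hugepages=1024 assignment a few lines further on is the result. As a rough sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps later in this log (a paraphrase of what the trace shows, not the SPDK helper itself):

    # Sketch of the sizing step: a pool size in kB divided by the kernel's
    # hugepage size gives the page count the test will ask setup.sh for.
    size_kb=2097152                                                      # even_2G_alloc requests 2 GiB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024
    echo "nr_hugepages=$nr_hugepages"
    # The trace then hands the target to the setup script, roughly:
    #   NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes ./scripts/setup.sh

NRHUGE and HUGE_EVEN_ALLOC are exactly the two knobs that appear a few lines below before scripts/setup.sh runs; HUGE_EVEN_ALLOC=yes is what asks for the even per-node spread the test name refers to.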
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.744 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:44.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:44.007 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.007 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
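From here the log traces get_meminfo one command at a time, once per statistic verify_nr_hugepages needs; the backslashed patterns such as \A\n\o\n\H\u\g\e\P\a\g\e\s are just how set -x renders the quoted right-hand side of the [[ $var == "$get" ]] test. Condensed, the traced commands amount to roughly the helper below. This is a sketch reconstructed from the trace, not the file in setup/common.sh, and the per-node branch in particular is an assumption:

    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

    # get_meminfo KEY [NODE]: print the value of KEY from /proc/meminfo, or from
    # the per-node meminfo file when a NUMA node number is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem

        # per-node statistics live under sysfs and carry a "Node N " prefix
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the [[ ]] / continue pair repeated throughout the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total   # e.g. prints 1024 on the VM traced here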
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8083440 kB' 'MemAvailable: 9481072 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452624 kB' 'Inactive: 1283412 kB' 'Active(anon): 131780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122888 kB' 'Mapped: 48892 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132796 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71720 kB' 'KernelStack: 6452 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
00:03:44.007 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the read loop at setup/common.sh@31-32 tests every key from MemTotal through HardwareCorrupted against AnonHugePages and hits 'continue' on each ...]
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.008 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8083188 kB' 'MemAvailable: 9480820 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452420 kB' 'Inactive: 1283412 kB' 'Active(anon): 131576 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122728 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132804 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71728 kB' 'KernelStack: 6448 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
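For orientation: once these reads finish, verify_nr_hugepages ends the same way the per_node_1G_alloc run at the top of this excerpt did, comparing the per-node counts it expects (nodes_test) with what the kernel reports (nodes_sys) and printing the 'node0=... expecting ...' lines. A small sketch of that closing comparison, with both arrays filled with assumed values purely for illustration:

    # Sketch of the per-node comparison behind the "node0=512 expecting 512" output.
    nodes_test=( [0]=512 )   # what the test asked for on each node (assumed here)
    nodes_sys=(  [0]=512 )   # what the kernel reports per node (assumed here)

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
    done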
[... xtrace elided: the read loop at setup/common.sh@31-32 tests every key from MemTotal through HugePages_Rsvd against HugePages_Surp and hits 'continue' on each ...]
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.010 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8083188 kB' 'MemAvailable: 9480820 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452360 kB' 'Inactive: 1283412 kB' 'Active(anon): 131516 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122668 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132800 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71724 kB' 'KernelStack: 6432 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[... xtrace continues: the read loop has tested MemTotal through Committed_AS against HugePages_Rsvd so far, each hitting 'continue'; the scan carries on past this point ...]
10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 
10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.272 nr_hugepages=1024 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.272 resv_hugepages=0 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.272 surplus_hugepages=0 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.272 anon_hugepages=0 00:03:44.272 10:42:13 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.272 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8085424 kB' 'MemAvailable: 9483056 kB' 'Buffers: 2436 kB' 'Cached: 1612284 kB' 'SwapCached: 0 kB' 'Active: 452176 kB' 'Inactive: 1283412 kB' 'Active(anon): 131332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122476 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132796 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71720 kB' 'KernelStack: 6432 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
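The stretch of trace above and below is setup/common.sh's get_meminfo doing nothing more exotic than walking a meminfo file field by field: each line is split on ': ' into a name and a value, every name that is not the requested key hits the continue branch, and the matching key's value is echoed back (0 for HugePages_Rsvd, 1024 for HugePages_Total). A minimal standalone sketch of that pattern follows; get_meminfo_value is a hypothetical name and the Node-prefix handling is simplified compared with the real helper.

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern traced above: scan a meminfo file
# line by line and print the value of a single field. get_meminfo_value is
# a hypothetical stand-in, not the real get_meminfo from setup/common.sh.
get_meminfo_value() {
    local key=$1 node=${2:-} file=/proc/meminfo var val
    # Per-node counters live under /sys and carry a "Node N " prefix there.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        # Every field that is not the requested one corresponds to a
        # "continue" line in the xtrace above.
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]\+ //' "$file")
    return 1
}

get_meminfo_value HugePages_Rsvd      # -> 0 in the run above
get_meminfo_value HugePages_Surp 0    # -> surplus pages on node 0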
00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.273 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.274 10:42:13 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8085424 kB' 'MemUsed: 4156548 kB' 'SwapCached: 0 kB' 'Active: 452028 kB' 'Inactive: 1283412 kB' 'Active(anon): 131184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283412 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1614720 kB' 'Mapped: 48616 kB' 'AnonPages: 122312 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61076 kB' 'Slab: 132788 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 
10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.275 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.276 node0=1024 expecting 1024 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.276 00:03:44.276 real 0m0.532s 00:03:44.276 user 0m0.269s 00:03:44.276 sys 0m0.298s 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.276 10:42:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.276 
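Stripped of the per-field tracing, the even_2G_alloc verification that just finished is plain accounting: HugePages_Total from /proc/meminfo has to equal the 1024 pages the test requested plus any surplus and reserved pages, and node0 has to report the same 1024 (the 'node0=1024 expecting 1024' echo above). A rough standalone rendition, using awk instead of the scripts' own read loop and assuming the single-NUMA-node layout seen in this run:

#!/usr/bin/env bash
# Rough rendition of the accounting behind the even_2G_alloc check
# (illustrative only; variable names here are not the scripts' own).
expected=1024                                   # pages the test asked for
total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)

# Global pool: requested pages plus surplus plus reserved must add up.
(( total == expected + surp + rsvd )) || echo "global hugepage count mismatch"

# Each node must report its expected share (node0=1024 in the log above).
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    node_total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "node$node=$node_total expecting $expected"
done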
************************************ 00:03:44.276 END TEST even_2G_alloc 00:03:44.276 ************************************ 00:03:44.276 10:42:13 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:44.276 10:42:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.276 10:42:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.276 10:42:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.276 ************************************ 00:03:44.276 START TEST odd_alloc 00:03:44.276 ************************************ 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.276 10:42:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:44.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.535 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:44.535 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:44.536 10:42:14 
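The odd_alloc test that starts here asks get_test_nr_hugepages for 2098176 kB (HUGEMEM=2049 MiB) and the trace settles on nr_hugepages=1025. The arithmetic is worth spelling out; the numbers below come straight from the log, while the ceiling division is only an assumption about the rounding rule inside setup/hugepages.sh, chosen because the trace ends up at an odd 1025.

#!/usr/bin/env bash
# Worked numbers behind the odd_alloc sizing traced above.
HUGEMEM=2049                      # MiB requested for this test
size_kb=$((HUGEMEM * 1024))       # 2098176 kB, as in get_test_nr_hugepages 2098176
hugepage_kb=2048                  # default 2 MiB hugepage size

# 2098176 / 2048 = 1024.5 pages, so rounding up yields the odd count 1025.
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=1025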
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8083488 kB' 'MemAvailable: 9481124 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 452488 kB' 'Inactive: 1283416 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122852 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132760 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71684 kB' 'KernelStack: 6424 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.536 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.799 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8083488 kB' 'MemAvailable: 9481124 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 451968 kB' 'Inactive: 1283416 kB' 'Active(anon): 131124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122248 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132768 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 6424 kB' 'PageTables: 4044 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.800 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.801 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.802 10:42:14 
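At this point anon and surp are both 0; the remainder of the verification, traced below, boils down to the accounting sketched here (values taken from this trace, where HugePages_Rsvd also reads back as 0):

anon=0            # AnonHugePages, resolved above at setup/hugepages.sh@97
surp=0            # HugePages_Surp, resolved above at setup/hugepages.sh@99
resv=0            # HugePages_Rsvd, resolved in the trace below at setup/hugepages.sh@100
nr_hugepages=1025
(( 1025 == nr_hugepages + surp + resv ))   # the check at setup/hugepages.sh@107
(( 1025 == nr_hugepages ))                 # and at @109, before HugePages_Total is re-read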
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.802 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8083488 kB' 'MemAvailable: 9481124 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 451868 kB' 'Inactive: 1283416 kB' 'Active(anon): 131024 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122148 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132768 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 6392 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.803 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 
10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.804 nr_hugepages=1025 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:44.804 resv_hugepages=0 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.804 surplus_hugepages=0 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.804 anon_hugepages=0 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.804 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8083488 
kB' 'MemAvailable: 9481124 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 451868 kB' 'Inactive: 1283416 kB' 'Active(anon): 131024 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122148 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132768 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 6392 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 
10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.805 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.806 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.807 
10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8083488 kB' 'MemUsed: 4158484 kB' 'SwapCached: 0 kB' 'Active: 452104 kB' 'Inactive: 1283416 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1614724 kB' 'Mapped: 48636 kB' 'AnonPages: 122428 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61076 kB' 'Slab: 132768 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.807 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.808 node0=1025 expecting 1025 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:44.808 
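The loop traced above is the meminfo lookup the hugepages helpers repeat for each field they need: split every "key: value" line on ': ', skip non-matching keys, and echo the value of the requested one (1025 for HugePages_Total globally, 0 for HugePages_Surp on node0 here). A minimal sketch of that pattern, as an illustrative stand-in rather than the verbatim setup/common.sh code:

  # Hypothetical helper mirroring the traced lookup; the node argument is optional.
  get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node stats come from sysfs when a node is given and the file exists.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      # sysfs per-node lines carry a "Node N " prefix, so strip it before splitting.
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9][0-9]* *//' "$mem_f")
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Total   -> 1025 while odd_alloc runs
  #      get_meminfo_sketch HugePages_Surp 0  -> 0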
00:03:44.808 real 0m0.524s 00:03:44.808 user 0m0.265s 00:03:44.808 sys 0m0.290s 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.808 10:42:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.808 ************************************ 00:03:44.808 END TEST odd_alloc 00:03:44.808 ************************************ 00:03:44.808 10:42:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:44.808 10:42:14 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.808 10:42:14 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.808 10:42:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.808 ************************************ 00:03:44.808 START TEST custom_alloc 00:03:44.808 ************************************ 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.808 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.809 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.067 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.067 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.355 10:42:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.355 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9134244 kB' 'MemAvailable: 10531880 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 452476 kB' 'Inactive: 1283416 kB' 'Active(anon): 131632 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122732 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132772 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71696 kB' 'KernelStack: 6488 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
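The custom_alloc pass above sizes its pool from the requested kilobytes: with the 2048 kB Hugepagesize reported in the meminfo dumps, the 1048576 kB request works out to 512 pages, consistent with HUGENODE='nodes_hp[0]=512' and the HugePages_Total: 512 line in the node output. A one-line sketch of that arithmetic (illustrative only):

  # 1048576 kB requested / 2048 kB per hugepage -> 512 pages on node0
  echo $(( 1048576 / 2048 ))   # 512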
00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.356 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9134496 kB' 'MemAvailable: 10532132 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 451908 kB' 'Inactive: 1283416 kB' 'Active(anon): 131064 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132772 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71696 kB' 'KernelStack: 6456 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9134584 kB' 'MemAvailable: 10532220 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 451840 kB' 'Inactive: 1283416 kB' 'Active(anon): 130996 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122420 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132776 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71700 kB' 'KernelStack: 6432 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.359 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.360 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.361 nr_hugepages=512 00:03:45.361 resv_hugepages=0 00:03:45.361 surplus_hugepages=0 00:03:45.361 anon_hugepages=0 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.361 
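[editor's note] At this point the custom_alloc test has collected anon=0, surp=0 and resv=0 from the three scans above and checks them against the 512 pages it requested (the hugepages.sh@107 and @109 arithmetic visible in the trace) before re-reading HugePages_Total. A condensed restatement of that check, with the values copied from the echoes above and everything else purely illustrative:

    nr_hugepages=512   # from the 'echo nr_hugepages=512' above
    surp=0             # HugePages_Surp, read above
    resv=0             # HugePages_Rsvd, read above
    anon=0             # AnonHugePages (kB), read above
    if (( 512 == nr_hugepages + surp + resv )) && (( 512 == nr_hugepages )); then
        # Matches hugepages.sh@107/@109 in the trace; the HugePages_Total lookup
        # that starts next is expected to report the same 512 from meminfo.
        :
    fi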
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.361 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9134332 kB' 'MemAvailable: 10531968 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 451828 kB' 'Inactive: 1283416 kB' 'Active(anon): 130984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122164 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132768 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 6432 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[ field-by-field scan: setup/common.sh@31-32 read and skip MemTotal through Unaccepted, none of which match HugePages_Total ]
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
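The counters gathered here feed one arithmetic gate in setup/hugepages.sh (the (( ... )) tests at @107-@110 in this trace). Written out as a stand-alone check, and assuming the get_meminfo sketch above, it amounts to the following; the literal 512 is the page count this custom_alloc case asked for.

    # Values as they appear in this run of the test.
    nr_hugepages=512                         # requested by the test case
    surp=$(get_meminfo HugePages_Surp)       # 0 - kernel added no surplus pages
    resv=$(get_meminfo HugePages_Rsvd)       # 0 - nothing reserved but unfaulted
    total=$(get_meminfo HugePages_Total)     # 512 - what the kernel actually holds

    # The pool is accepted only if the reported total equals the requested count
    # once surplus and reserved pages are taken into account.
    (( total == nr_hugepages + surp + resv )) || exit 1
    (( total == nr_hugepages )) || exit 1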
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.363 10:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9134332 kB' 'MemUsed: 3107640 kB' 'SwapCached: 0 kB' 'Active: 451844 kB' 'Inactive: 1283416 kB' 'Active(anon): 131000 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1614724 kB' 'Mapped: 48636 kB' 'AnonPages: 122432 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61076 kB' 'Slab: 132768 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[ field-by-field scan: setup/common.sh@31-32 read and skip the node0 fields from MemTotal through HugePages_Free, none of which match HugePages_Surp ]
00:03:45.364 10:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.364 10:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.364 10:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
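With the system-wide total confirmed, the test repeats the comparison per NUMA node; on this single-node VM that is only node0, which is expected to hold all 512 pages. The following is a hedged reconstruction of the per-node bookkeeping behind the get_nodes/HugePages_Surp trace above: the loop structure, the array names, and the final echo come from the log, while the sysfs file used to fill nodes_sys is an inference (the trace only shows the literal 512). It assumes the get_meminfo sketch shown earlier.

    # nodes_test[] was filled with the requested per-node counts earlier in the test;
    # nodes_sys[] records what the kernel reports per node.
    shopt -s extglob nullglob
    declare -a nodes_sys nodes_test
    nodes_test[0]=512
    resv=0    # from the HugePages_Rsvd read above

    for node in /sys/devices/system/node/node+([0-9]); do
        node=${node##*node}
        # Inferred source of the 512 recorded in the trace: the per-node 2 MB pool size.
        nodes_sys[node]=$(< "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    for node in "${!nodes_test[@]}"; do
        # Fold in reserved and per-node surplus pages before comparing.
        (( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done

In the log this collapses to the single output line node0=512 expecting 512, which the [[ 512 == \5\1\2 ]] test just below accepts.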
00:03:45.364 node0=512 expecting 512
00:03:45.364 ************************************
00:03:45.364 END TEST custom_alloc
00:03:45.364 ************************************
00:03:45.364 10:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.364 10:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.364 10:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.364 10:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:45.364 10:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:45.364 10:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:45.364 
00:03:45.364 real	0m0.560s
00:03:45.364 user	0m0.284s
00:03:45.364 sys	0m0.281s
00:03:45.365 10:42:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:45.365 10:42:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:45.365 10:42:15 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:45.365 10:42:15 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:45.365 10:42:15 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:45.365 10:42:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:45.365 ************************************
00:03:45.365 START TEST no_shrink_alloc
00:03:45.365 ************************************
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
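get_test_nr_hugepages 2097152 0, called above and traced next, turns a memory budget into a hugepage count and assigns it to the listed nodes. The sketch below collapses the helper and its per-node companion (get_test_nr_hugepages_per_node) into one function for brevity; the kB units are inferred from the numbers in this run (2097152 / 2048 = 1024, and nodes_test[0] ends up at 1024). It assumes the get_meminfo sketch shown earlier.

    # Hedged sketch: budget (kB) divided by the default hugepage size (kB) gives the
    # page count; each requested node then gets the full count in this single-node run.
    get_test_nr_hugepages() {
        local size=$1
        shift
        local node_ids=("$@")                           # ('0') in this run
        local default_hugepages
        default_hugepages=$(get_meminfo Hugepagesize)   # 2048 kB on this VM
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 2097152 0
    echo "${nodes_test[0]}"    # 1024

The Hugetlb figure in the meminfo snapshot that follows, 2097152 kB, is exactly this count times the 2048 kB page size.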
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:45.365 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.637 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.637 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:45.637 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:45.637 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:45.637 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:45.637 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:45.637 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.637 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:45.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:45.901 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:45.901 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
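scripts/setup.sh has just re-provisioned the pool for the 1024-page request and re-bound the test devices. The oddly quoted test at setup/hugepages.sh@96 above is the xtrace rendering of a transparent-hugepage check, and it appears to gate the AnonHugePages read that starts here: only when THP is not pinned to never is the counter read at all, so that THP-backed anonymous memory is not mistaken for pages from the reserved hugetlb pool. A minimal hedged sketch, assuming the THP mode comes from the usual sysfs file (the log only shows the expanded string always [madvise] never) and the get_meminfo sketch shown earlier:

    # Hedged sketch of the THP gate; the sysfs path is an assumption, the expanded
    # value seen in the trace is "always [madvise] never".
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is not fully disabled, so anonymous huge pages could exist and are
        # read out explicitly before the pool counts are compared.
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    fi
    echo "anon_hugepages=$anon"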
get=AnonHugePages 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8087048 kB' 'MemAvailable: 9484684 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 452656 kB' 'Inactive: 1283416 kB' 'Active(anon): 131812 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122928 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132820 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71744 kB' 'KernelStack: 6404 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.901 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / compare / continue sequence for each remaining /proc/meminfo key (Unevictable through HardwareCorrupted) while scanning for AnonHugePages ...]
00:03:45.902 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:45.902 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.902 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.902 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:45.902 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:45.902 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.902 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.902 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8087048 kB' 'MemAvailable: 9484684 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 451840 kB' 'Inactive: 1283416 kB' 'Active(anon): 130996 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122152 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132832 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71756 kB' 'KernelStack: 6416 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
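For readability: the setup/common.sh trace condensed above is a plain field lookup over the meminfo text just printed. A minimal sketch of that pattern, reconstructed from the trace for the flat (non-NUMA) case seen in this run (node is empty), using an abbreviated name rather than the real helper:

    # Sketch only: print the value column of one /proc/meminfo field.
    # The traced helper also handles /sys/devices/system/node/node<n>/meminfo,
    # stripping the "Node <n> " prefix first (common.sh@28-29); that branch is
    # omitted here because node='' in this log.
    get_meminfo_sketch() {    # usage: get_meminfo_sketch <field>
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # one compare per meminfo line; non-matches are the continue entries in the trace
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

On this host, get_meminfo_sketch AnonHugePages would print 0, matching the echo 0 / anon=0 result above; every non-matching field accounts for one compare/continue pair in the trace.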
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.903 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / compare / continue sequence for every key from MemFree through HugePages_Rsvd while scanning for HugePages_Surp ...]
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8087048 kB' 'MemAvailable: 9484684 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 452144 kB' 'Inactive: 1283416 kB' 'Active(anon): 131300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122408 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132832 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71756 kB' 'KernelStack: 6416 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
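As a side note, the hugepage counters embedded in the snapshot just printed can be pulled out directly for a manual spot-check; this one-liner is illustrative only and is not part of the SPDK test scripts:

    awk '/^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|AnonHugePages):/' /proc/meminfo

For this run that yields HugePages_Total/HugePages_Free of 1024, HugePages_Rsvd/HugePages_Surp of 0, Hugepagesize of 2048 kB and AnonHugePages of 0 kB, i.e. the same values get_meminfo extracts one field at a time below.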
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.905 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / compare / continue sequence for every key from MemFree through HugePages_Free while scanning for HugePages_Rsvd ...]
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.907 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8087048 kB' 'MemAvailable: 9484684 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 451912 kB' 'Inactive: 1283416 kB' 'Active(anon): 131068 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122204 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61076 kB' 'Slab: 132812 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71736 kB' 'KernelStack: 6432 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
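With the values collected above (anon=0, surp=0, resv=0, and 1024 huge pages reported), the two assertions traced at setup/hugepages.sh@107 and @109 reduce to simple arithmetic; restated here for readability, nothing beyond what the log already shows:

    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> true
    (( 1024 == nr_hugepages ))                 # pool still holds the requested count

Both conditions hold, consistent with what the no_shrink_alloc case appears to be verifying before it re-reads HugePages_Total below.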
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 skips MemFree, MemAvailable, Buffers, Cached, SwapCached and Active the same way while scanning for HugePages_Total ...]
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.907 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 
10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 
10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.908 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- 
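The xtrace above is setup/common.sh's meminfo lookup walking every key in the dump until it reaches HugePages_Total and echoing 1024. A condensed sketch of that loop follows, paraphrased from what the trace shows (mapfile the file, strip any "Node N " prefix, split on IFS=': ') rather than copied from the script; the helper name meminfo_value is illustrative only, not an SPDK function.
shopt -s extglob    # needed for the +([0-9]) pattern used to strip "Node N "
meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem entry var val _
    # Prefer the per-node meminfo when a node id is passed and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # per-node lines carry a "Node N " prefix
    for entry in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$entry"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# On this runner the trace shows: meminfo_value HugePages_Total -> 1024, meminfo_value MemTotal -> 12241972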
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8087048 kB' 'MemUsed: 4154924 kB' 'SwapCached: 0 kB' 'Active: 452096 kB' 'Inactive: 1283416 kB' 'Active(anon): 131252 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1614724 kB' 'Mapped: 48616 kB' 'AnonPages: 122356 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61076 kB' 'Slab: 132808 kB' 'SReclaimable: 61076 kB' 'SUnreclaim: 71732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.909 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 
10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.910 node0=1024 expecting 1024 00:03:45.910 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.911 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.911 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:45.911 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:45.911 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:45.911 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.911 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.484 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.484 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.484 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.484 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.484 10:42:15 
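Once the per-node lookup returns 0 surplus pages, hugepages.sh does the accounting that produces the "node0=1024 expecting 1024" line above; because the two numbers match, the test continues with CLEAR_HUGE=no NRHUGE=512 and re-runs scripts/setup.sh, which prints the INFO line about 512 requested hugepages already being covered by the 1024-page pool. A minimal sketch of that accounting is below; the array and variable names (nodes_sys, nodes_test, resv) follow the trace, while the loop scaffolding around them is assumed for illustration.
nr_hugepages=1024 surp=0 resv=0
nodes_sys=([0]=1024)                 # HugePages_Total read from node0's meminfo above
# The system-wide pool must still equal what was requested plus surplus and
# reserved pages, i.e. nothing was shrunk away while the test ran.
(( 1024 == nr_hugepages + surp + resv )) || echo "system hugepage pool shrank"
# Each node is then expected to hold its original share plus reserved pages and
# whatever per-node surplus its meminfo reports (0 for node0 in the trace).
nodes_test=()
for node in "${!nodes_sys[@]}"; do
    node_surp=0                      # HugePages_Surp from node0's meminfo above
    nodes_test[node]=$(( nodes_sys[node] + resv + node_surp ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done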
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.484 10:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097012 kB' 'MemAvailable: 9494644 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 448712 kB' 'Inactive: 1283416 kB' 'Active(anon): 127868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 119016 kB' 'Mapped: 48004 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132676 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71608 kB' 'KernelStack: 6408 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.484 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 
10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.485 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8097140 kB' 'MemAvailable: 9494772 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 447832 kB' 'Inactive: 1283416 kB' 'Active(anon): 126988 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 
'Writeback: 0 kB' 'AnonPages: 118352 kB' 'Mapped: 47876 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132672 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71604 kB' 'KernelStack: 6320 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.486 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.487 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8096892 kB' 'MemAvailable: 9494524 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 447904 kB' 'Inactive: 1283416 kB' 'Active(anon): 127060 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118168 kB' 'Mapped: 47876 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132672 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71604 kB' 'KernelStack: 6320 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.488 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
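The wall of xtrace above and below is setup/common.sh's get_meminfo helper walking /proc/meminfo one key at a time: it splits each line on ': ', skips every key that does not match the requested one (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total in turn, hence the long runs of 'continue'), and echoes the matching value. Below is a minimal sketch of that loop, reconstructed only from the commands visible in the trace; the function signature, loop plumbing and per-node handling are assumptions, not the verbatim SPDK source.

shopt -s extglob   # needed for the +([0-9]) pattern that appears in the @29 trace line

get_meminfo() {
    # Assumed signature: meminfo key to look up, optional NUMA node (empty in this run).
    local get=$1 node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # When a node is given and its meminfo exists, read the per-node file instead
    # (the trace shows the -e and -n checks with node empty, so /proc/meminfo is used here).
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node N " prefix; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated 'continue' entries in the trace
        echo "$val"                        # e.g. 0 for HugePages_Surp
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

# How hugepages.sh appears to use it in this test:
surp=$(get_meminfo HugePages_Surp)    # -> 0
resv=$(get_meminfo HugePages_Rsvd)    # -> 0

Those values, together with anon_hugepages and nr_hugepages=1024, feed the sanity check traced further down, (( 1024 == nr_hugepages + surp + resv )).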
00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.490 nr_hugepages=1024 00:03:46.490 resv_hugepages=0 00:03:46.490 surplus_hugepages=0 00:03:46.490 anon_hugepages=0 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8096892 kB' 'MemAvailable: 9494524 kB' 'Buffers: 2436 kB' 'Cached: 1612288 kB' 'SwapCached: 0 kB' 'Active: 447948 kB' 'Inactive: 1283416 kB' 'Active(anon): 127104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118212 kB' 'Mapped: 47876 kB' 'Shmem: 10464 kB' 'KReclaimable: 61068 kB' 'Slab: 132672 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71604 kB' 'KernelStack: 6320 kB' 'PageTables: 
3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.491 10:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.491 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8096892 kB' 'MemUsed: 4145080 kB' 'SwapCached: 0 kB' 'Active: 447668 kB' 'Inactive: 1283416 kB' 'Active(anon): 126824 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1283416 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1614724 kB' 'Mapped: 47876 kB' 'AnonPages: 118192 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61068 kB' 'Slab: 132672 kB' 'SReclaimable: 61068 kB' 'SUnreclaim: 71604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 
10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.493 node0=1024 expecting 1024 00:03:46.493 ************************************ 00:03:46.493 END TEST no_shrink_alloc 00:03:46.493 ************************************ 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.493 00:03:46.493 real 0m1.103s 00:03:46.493 user 0m0.512s 00:03:46.493 sys 0m0.602s 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.493 10:42:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.752 10:42:16 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:46.752 10:42:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:46.752 10:42:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.752 10:42:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.752 10:42:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.752 10:42:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.752 10:42:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.752 10:42:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:46.752 10:42:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:46.752 ************************************ 00:03:46.752 END TEST hugepages 00:03:46.752 ************************************ 00:03:46.752 00:03:46.752 real 0m4.683s 00:03:46.752 user 0m2.213s 00:03:46.752 sys 0m2.495s 00:03:46.752 10:42:16 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.752 10:42:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.752 10:42:16 setup.sh -- 
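For readers skimming the trace: the no_shrink_alloc pass above amounts to reading HugePages_Total from /proc/meminfo and HugePages_Surp from /sys/devices/system/node/node0/meminfo with an IFS=': ' read loop, then asserting 1024 == nr_hugepages + surp + resv and node0=1024. The pattern reduces to the sketch below; get_meminfo_sketch and the sed-based Node-prefix strip are illustrative stand-ins, not the literal setup/common.sh code.

# Hedged sketch of the meminfo lookup pattern traced above. Field names and
# file paths come from the log; the helper name is illustrative.
get_meminfo_sketch() {
  local get=$1 node=$2
  local mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local var val _
  # per-node meminfo lines carry a "Node <id> " prefix, so strip it first
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
  return 1
}
# e.g. get_meminfo_sketch HugePages_Total   -> 1024 in the run above
#      get_meminfo_sketch HugePages_Surp 0  -> 0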
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:46.752 10:42:16 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.752 10:42:16 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.752 10:42:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.752 ************************************ 00:03:46.752 START TEST driver 00:03:46.752 ************************************ 00:03:46.752 10:42:16 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:46.752 * Looking for test storage... 00:03:46.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:46.752 10:42:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:46.752 10:42:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.752 10:42:16 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.318 10:42:16 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:47.318 10:42:16 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.318 10:42:16 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.318 10:42:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:47.318 ************************************ 00:03:47.318 START TEST guess_driver 00:03:47.318 ************************************ 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:47.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:03:47.318 Looking for driver=uio_pci_generic 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.318 10:42:16 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:47.886 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:47.886 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:47.886 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.146 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.146 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:48.146 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.146 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.146 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:48.146 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.146 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:48.146 10:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:48.146 10:42:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.146 10:42:17 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.713 00:03:48.713 real 0m1.452s 00:03:48.713 user 0m0.516s 00:03:48.713 sys 0m0.917s 00:03:48.713 ************************************ 00:03:48.713 END TEST guess_driver 00:03:48.713 ************************************ 00:03:48.713 10:42:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.713 10:42:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.713 00:03:48.713 real 0m2.142s 00:03:48.713 user 0m0.748s 00:03:48.713 sys 0m1.440s 00:03:48.713 ************************************ 00:03:48.713 END TEST driver 00:03:48.713 ************************************ 00:03:48.713 10:42:18 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.713 10:42:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.972 10:42:18 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:48.972 10:42:18 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.972 10:42:18 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.972 10:42:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.972 ************************************ 00:03:48.972 START TEST devices 00:03:48.972 
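Condensing the guess_driver run above: vfio-pci is preferred when IOMMU groups exist or unsafe no-IOMMU mode is enabled, otherwise uio_pci_generic is accepted if modprobe can resolve it to a .ko module, which is the branch taken on this VM. A hedged restatement follows; pick_driver_sketch is an illustrative name, and the explicit nullglob is an assumption about how the empty iommu_groups expansion behaves in the real script.

# Illustrative condensation of the pick_driver flow from the trace; not the
# literal test/setup/driver.sh source, but the precedence matches the log.
pick_driver_sketch() {
  local unsafe=""
  [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
    unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
  shopt -s nullglob
  local iommu_groups=(/sys/kernel/iommu_groups/*)
  if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
    echo vfio-pci
  elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
    echo uio_pci_generic       # branch taken in this run
  else
    echo 'No valid driver found'
  fi
}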
************************************ 00:03:48.972 10:42:18 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:48.972 * Looking for test storage... 00:03:48.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:48.972 10:42:18 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:48.972 10:42:18 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:48.972 10:42:18 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.972 10:42:18 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.926 10:42:19 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.926 10:42:19 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
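The get_zoned_devs pass above inspects each namespace's queue/zoned sysfs attribute; all four report none, so no device is excluded. That check boils down to the small predicate below (is_block_zoned_sketch is an illustrative name):

# A namespace counts as zoned when sysfs reports anything other than "none".
is_block_zoned_sketch() {
  local device=$1
  [[ -e /sys/block/$device/queue/zoned ]] || return 1
  [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}
# is_block_zoned_sketch nvme0n1 -> false (queue/zoned is "none" in this run)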
00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:49.927 No valid GPT data, bailing 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:49.927 No valid GPT data, bailing 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:49.927 No valid GPT data, bailing 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:49.927 No valid GPT data, bailing 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:49.927 10:42:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:49.927 10:42:19 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:49.927 10:42:19 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:49.927 10:42:19 setup.sh.devices 
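Summing up the enumeration: each non-character NVMe namespace maps to its controller's PCI address (nvme0n1..n3 -> 0000:00:11.0, nvme1n1 -> 0000:00:10.0), spdk-gpt.py and blkid find no partition-table signature on any of them, and every namespace clears min_disk_size=3221225472 bytes, so blocks ends up with four entries and nvme0n1 is declared the test disk. A rough equivalent of that filter is sketched below; the readlink-based PCI lookup and the blkid-only in-use check are simplifications of what devices.sh and scripts/spdk-gpt.py actually do.

# Rough sketch of the namespace filter traced above (run as root); paths and
# the size threshold come from the log, the PCI lookup is a simplification.
min_disk_size=3221225472
blocks=()
declare -A blocks_to_pci
shopt -s extglob nullglob
for block in /sys/block/nvme!(*c*); do
  name=${block##*/}
  ctrl=${name%%n+([0-9])}                              # nvme0n1 -> nvme0
  pci=$(basename "$(readlink -f /sys/class/nvme/$ctrl/device)")
  # treat a namespace with an existing partition table as in use and skip it
  [[ -n $(blkid -s PTTYPE -o value /dev/$name 2>/dev/null) ]] && continue
  size=$(( $(< $block/size) * 512 ))                   # 512-byte sectors -> bytes
  (( size >= min_disk_size )) || continue
  blocks+=("$name")
  blocks_to_pci[$name]=$pci
done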
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.927 10:42:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:49.927 ************************************ 00:03:49.927 START TEST nvme_mount 00:03:49.927 ************************************ 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.927 10:42:19 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:51.302 Creating new GPT entries in memory. 00:03:51.302 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.302 other utilities. 00:03:51.302 10:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.302 10:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.302 10:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.302 10:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.302 10:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:52.236 Creating new GPT entries in memory. 00:03:52.236 The operation has completed successfully. 
00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57020 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.236 10:42:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.494 10:42:22 
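In short, the nvme_mount setup zaps the GPT on nvme0n1, creates partition 1 with the 2048..264191 bounds derived from size=1073741824/4096, formats it ext4, mounts it at the test directory, and leaves a test_nvme marker file for the verify step. A minimal root-only sketch with the same device and paths; error handling and the sync_dev_uevents.sh wrapper are omitted, and the marker-file creation is shown in simplified form.

# Sketch of the partition/format/mount steps from the trace; bounds and paths
# are copied from the log.
disk=/dev/nvme0n1
nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                 # wipe existing GPT/MBR signatures
sgdisk "$disk" --new=1:2048:264191       # test partition -> /dev/nvme0n1p1
mkdir -p "$nvme_mount"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$nvme_mount"
: > "$nvme_mount/test_nvme"              # marker checked by the verify step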
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.494 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.494 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.494 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:52.753 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.753 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:53.011 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:53.011 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:53.011 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:53.011 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.011 10:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.269 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.269 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:53.269 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.269 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.269 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.269 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.269 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.269 10:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.528 10:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.786 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.786 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:53.786 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.786 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.786 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.786 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.045 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.045 00:03:54.045 real 0m4.101s 00:03:54.045 user 0m0.718s 00:03:54.045 sys 0m1.087s 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.045 10:42:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:54.045 ************************************ 00:03:54.045 END TEST nvme_mount 00:03:54.045 
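The nvme_mount test traced above follows one pattern: format the target with mkfs.ext4 -qF, mount it under test/setup/nvme_mount, create a test_nvme file, confirm the mount point and file with mountpoint -q and -e checks, then unmount and wipefs the device. A hand-run equivalent of that sequence, as a sketch only (device and mount point are the ones shown in the trace; the real logic lives in test/setup/devices.sh and test/setup/common.sh), might look like:

    dev=/dev/nvme0n1p1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$dev"            # same flags as the common.sh@71 trace
    mount "$dev" "$mnt"
    : > "$mnt/test_nvme"            # the test file the verify step looks for
    mountpoint -q "$mnt"            # verify: mount is live
    [[ -e $mnt/test_nvme ]]         # verify: file is visible on the mount
    rm "$mnt/test_nvme"
    umount "$mnt"                   # cleanup_nvme
    wipefs --all "$dev"             # erases the ext4 signature (53 ef), as logged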
************************************ 00:03:54.045 10:42:23 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:54.045 10:42:23 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.045 10:42:23 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.045 10:42:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:54.303 ************************************ 00:03:54.303 START TEST dm_mount 00:03:54.303 ************************************ 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:54.303 10:42:23 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:55.235 Creating new GPT entries in memory. 00:03:55.235 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:55.235 other utilities. 00:03:55.235 10:42:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:55.235 10:42:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.235 10:42:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:55.235 10:42:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:55.235 10:42:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:56.167 Creating new GPT entries in memory. 00:03:56.167 The operation has completed successfully. 
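The sgdisk range in the call just above, and the one in the call that follows, come from the common.sh arithmetic visible in this trace: size starts at 1073741824, is divided by 4096, the first partition starts at sector 2048, and each later partition starts one sector after the previous end. Recomputed standalone (illustration only; with 512-byte logical sectors each partition works out to 262144 sectors, i.e. 128 MiB):

    size=1073741824              # common.sh@41
    (( size /= 4096 ))           # common.sh@51 -> 262144
    part_start=0 part_end=0
    for part in 1 2; do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))   # common.sh@58
        (( part_end = part_start + size - 1 ))                     # common.sh@59
        echo "--new=${part}:${part_start}:${part_end}"
    done
    # prints --new=1:2048:264191 and --new=2:264192:526335,
    # matching the two flock'ed sgdisk calls in the trace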
00:03:56.167 10:42:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:56.167 10:42:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.167 10:42:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:56.167 10:42:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.167 10:42:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:57.575 The operation has completed successfully. 00:03:57.575 10:42:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:57.575 10:42:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.575 10:42:26 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57456 00:03:57.575 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:57.575 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.575 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:57.575 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:57.575 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.576 10:42:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:57.576 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:57.576 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:57.576 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:57.576 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.576 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:57.576 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.576 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:57.576 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:57.835 10:42:27 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.835 10:42:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.093 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.093 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:58.093 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:58.093 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.093 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.093 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.093 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.093 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:58.352 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:03:58.352 00:03:58.352 real 0m4.204s 00:03:58.352 user 0m0.464s 00:03:58.352 sys 0m0.690s 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.352 10:42:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:58.352 ************************************ 00:03:58.352 END TEST dm_mount 00:03:58.352 ************************************ 00:03:58.352 10:42:28 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:58.352 10:42:28 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:58.352 10:42:28 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.352 10:42:28 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.352 10:42:28 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:58.352 10:42:28 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.352 10:42:28 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:58.611 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:58.611 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:58.611 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:58.611 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:58.611 10:42:28 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:58.611 10:42:28 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.611 10:42:28 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:58.611 10:42:28 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.611 10:42:28 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:58.611 10:42:28 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.611 10:42:28 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:58.611 00:03:58.611 real 0m9.864s 00:03:58.611 user 0m1.836s 00:03:58.611 sys 0m2.386s 00:03:58.611 10:42:28 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.611 ************************************ 00:03:58.611 10:42:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:58.611 END TEST devices 00:03:58.611 ************************************ 00:03:58.870 00:03:58.870 real 0m21.642s 00:03:58.870 user 0m6.874s 00:03:58.870 sys 0m9.084s 00:03:58.870 10:42:28 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.870 10:42:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.870 ************************************ 00:03:58.870 END TEST setup.sh 00:03:58.870 ************************************ 00:03:58.870 10:42:28 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:59.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.437 Hugepages 00:03:59.437 node hugesize free / total 00:03:59.437 node0 1048576kB 0 / 0 00:03:59.437 node0 2048kB 2048 / 2048 00:03:59.437 00:03:59.437 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.437 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:59.695 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:59.695 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:03:59.695 10:42:29 -- spdk/autotest.sh@130 -- # uname -s 00:03:59.695 10:42:29 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:59.695 10:42:29 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:59.695 10:42:29 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.521 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.521 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.521 10:42:30 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:01.457 10:42:31 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:01.457 10:42:31 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:01.457 10:42:31 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:01.457 10:42:31 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:01.457 10:42:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:01.457 10:42:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:01.457 10:42:31 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.457 10:42:31 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.457 10:42:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:01.714 10:42:31 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:01.715 10:42:31 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:01.715 10:42:31 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.972 Waiting for block devices as requested 00:04:01.972 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.230 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.230 10:42:31 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:02.230 10:42:31 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:02.230 10:42:31 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.230 10:42:31 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:02.230 10:42:31 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.230 10:42:31 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:02.230 10:42:31 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.230 10:42:31 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:02.230 10:42:31 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:02.230 10:42:31 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:02.230 10:42:31 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:02.230 10:42:31 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:02.230 10:42:31 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:02.230 10:42:31 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:02.230 10:42:31 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:02.230 10:42:31 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:02.230 10:42:31 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
00:04:02.230 10:42:31 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:02.230 10:42:31 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:02.230 10:42:31 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:02.230 10:42:31 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:02.230 10:42:31 -- common/autotest_common.sh@1557 -- # continue 00:04:02.230 10:42:31 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:02.230 10:42:31 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:02.230 10:42:31 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.230 10:42:31 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:02.230 10:42:31 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.230 10:42:31 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:02.230 10:42:31 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.230 10:42:31 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:02.230 10:42:31 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:02.230 10:42:31 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:02.230 10:42:31 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:02.230 10:42:31 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:02.230 10:42:31 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:02.230 10:42:31 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:02.230 10:42:31 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:02.230 10:42:31 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:02.230 10:42:31 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:02.230 10:42:31 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:02.230 10:42:31 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:02.230 10:42:31 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:02.230 10:42:31 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:02.230 10:42:31 -- common/autotest_common.sh@1557 -- # continue 00:04:02.230 10:42:31 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:02.230 10:42:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.230 10:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:02.230 10:42:31 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:02.230 10:42:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.230 10:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:02.230 10:42:31 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.166 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.166 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.166 10:42:32 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:03.166 10:42:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.166 10:42:32 -- common/autotest_common.sh@10 -- # set +x 00:04:03.166 10:42:32 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:03.166 10:42:32 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:03.166 10:42:32 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.166 10:42:32 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:04:03.166 10:42:32 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:03.166 10:42:32 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:03.166 10:42:32 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:03.166 10:42:32 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:03.166 10:42:32 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.166 10:42:32 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:03.166 10:42:32 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.166 10:42:32 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:03.166 10:42:32 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.166 10:42:32 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:03.166 10:42:32 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:03.166 10:42:32 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:03.167 10:42:32 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.167 10:42:32 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:03.167 10:42:32 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:03.167 10:42:32 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:03.167 10:42:32 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.167 10:42:32 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:03.167 10:42:32 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:03.167 10:42:32 -- common/autotest_common.sh@1593 -- # return 0 00:04:03.167 10:42:32 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:03.167 10:42:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:03.167 10:42:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:03.167 10:42:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:03.167 10:42:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:03.167 10:42:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.167 10:42:32 -- common/autotest_common.sh@10 -- # set +x 00:04:03.167 10:42:32 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:03.167 10:42:32 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:03.167 10:42:32 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:03.167 10:42:32 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.167 10:42:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.167 10:42:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.167 10:42:32 -- common/autotest_common.sh@10 -- # set +x 00:04:03.167 ************************************ 00:04:03.167 START TEST env 00:04:03.167 ************************************ 00:04:03.167 10:42:32 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.425 * Looking for test storage... 
00:04:03.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:03.425 10:42:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.425 10:42:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.425 10:42:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.425 10:42:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.425 ************************************ 00:04:03.425 START TEST env_memory 00:04:03.425 ************************************ 00:04:03.425 10:42:32 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.425 00:04:03.425 00:04:03.425 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.425 http://cunit.sourceforge.net/ 00:04:03.425 00:04:03.425 00:04:03.425 Suite: memory 00:04:03.425 Test: alloc and free memory map ...[2024-07-25 10:42:33.032358] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.425 passed 00:04:03.426 Test: mem map translation ...[2024-07-25 10:42:33.056251] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.426 [2024-07-25 10:42:33.056277] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.426 [2024-07-25 10:42:33.056318] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.426 [2024-07-25 10:42:33.056327] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:03.426 passed 00:04:03.426 Test: mem map registration ...[2024-07-25 10:42:33.105684] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:03.426 [2024-07-25 10:42:33.105717] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:03.426 passed 00:04:03.685 Test: mem map adjacent registrations ...passed 00:04:03.685 00:04:03.685 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.685 suites 1 1 n/a 0 0 00:04:03.685 tests 4 4 4 0 0 00:04:03.685 asserts 152 152 152 0 n/a 00:04:03.685 00:04:03.685 Elapsed time = 0.166 seconds 00:04:03.685 00:04:03.685 real 0m0.181s 00:04:03.685 user 0m0.167s 00:04:03.685 sys 0m0.012s 00:04:03.685 10:42:33 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.685 10:42:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:03.685 ************************************ 00:04:03.685 END TEST env_memory 00:04:03.685 ************************************ 00:04:03.685 10:42:33 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:03.685 10:42:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.685 10:42:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.685 10:42:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.685 ************************************ 00:04:03.685 START TEST env_vtophys 00:04:03.685 ************************************ 00:04:03.685 10:42:33 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:03.685 EAL: lib.eal log level changed from notice to debug 00:04:03.685 EAL: Detected lcore 0 as core 0 on socket 0 00:04:03.685 EAL: Detected lcore 1 as core 0 on socket 0 00:04:03.685 EAL: Detected lcore 2 as core 0 on socket 0 00:04:03.685 EAL: Detected lcore 3 as core 0 on socket 0 00:04:03.685 EAL: Detected lcore 4 as core 0 on socket 0 00:04:03.685 EAL: Detected lcore 5 as core 0 on socket 0 00:04:03.685 EAL: Detected lcore 6 as core 0 on socket 0 00:04:03.685 EAL: Detected lcore 7 as core 0 on socket 0 00:04:03.685 EAL: Detected lcore 8 as core 0 on socket 0 00:04:03.685 EAL: Detected lcore 9 as core 0 on socket 0 00:04:03.685 EAL: Maximum logical cores by configuration: 128 00:04:03.685 EAL: Detected CPU lcores: 10 00:04:03.685 EAL: Detected NUMA nodes: 1 00:04:03.685 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:03.685 EAL: Detected shared linkage of DPDK 00:04:03.685 EAL: No shared files mode enabled, IPC will be disabled 00:04:03.685 EAL: Selected IOVA mode 'PA' 00:04:03.685 EAL: Probing VFIO support... 00:04:03.685 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:03.685 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:03.685 EAL: Ask a virtual area of 0x2e000 bytes 00:04:03.685 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:03.685 EAL: Setting up physically contiguous memory... 00:04:03.685 EAL: Setting maximum number of open files to 524288 00:04:03.685 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:03.685 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:03.685 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.685 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:03.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.685 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.685 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:03.685 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:03.685 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.685 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:03.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.685 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.685 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:03.685 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:03.685 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.685 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:03.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.685 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.685 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:03.685 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:03.685 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.685 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:03.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.685 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.685 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:03.685 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:03.685 EAL: Hugepages will be freed exactly as allocated. 
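The four 0x400000000-byte virtual areas requested above are the DPDK memseg lists, sized from the values in the same trace: 8192 segments per list times the 2 MiB hugepage size gives 16 GiB of reserved address space per list. A quick check in plain shell arithmetic (sketch only):

    n_segs=8192                  # from 'Creating 4 segment lists: n_segs:8192 ...'
    hugepage_sz=2097152          # 2 MiB
    printf '0x%x bytes (%d GiB)\n' $(( n_segs * hugepage_sz )) $(( (n_segs * hugepage_sz) >> 30 ))
    # -> 0x400000000 bytes (16 GiB), matching each 'Ask a virtual area' line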
00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: TSC frequency is ~2200000 KHz 00:04:03.685 EAL: Main lcore 0 is ready (tid=7f50d1857a00;cpuset=[0]) 00:04:03.685 EAL: Trying to obtain current memory policy. 00:04:03.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.685 EAL: Restoring previous memory policy: 0 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was expanded by 2MB 00:04:03.685 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:03.685 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:03.685 EAL: Mem event callback 'spdk:(nil)' registered 00:04:03.685 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:03.685 00:04:03.685 00:04:03.685 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.685 http://cunit.sourceforge.net/ 00:04:03.685 00:04:03.685 00:04:03.685 Suite: components_suite 00:04:03.685 Test: vtophys_malloc_test ...passed 00:04:03.685 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:03.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.685 EAL: Restoring previous memory policy: 4 00:04:03.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was expanded by 4MB 00:04:03.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was shrunk by 4MB 00:04:03.685 EAL: Trying to obtain current memory policy. 00:04:03.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.685 EAL: Restoring previous memory policy: 4 00:04:03.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was expanded by 6MB 00:04:03.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was shrunk by 6MB 00:04:03.685 EAL: Trying to obtain current memory policy. 00:04:03.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.685 EAL: Restoring previous memory policy: 4 00:04:03.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was expanded by 10MB 00:04:03.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was shrunk by 10MB 00:04:03.685 EAL: Trying to obtain current memory policy. 
00:04:03.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.685 EAL: Restoring previous memory policy: 4 00:04:03.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was expanded by 18MB 00:04:03.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was shrunk by 18MB 00:04:03.685 EAL: Trying to obtain current memory policy. 00:04:03.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.685 EAL: Restoring previous memory policy: 4 00:04:03.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was expanded by 34MB 00:04:03.685 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.685 EAL: request: mp_malloc_sync 00:04:03.685 EAL: No shared files mode enabled, IPC is disabled 00:04:03.685 EAL: Heap on socket 0 was shrunk by 34MB 00:04:03.685 EAL: Trying to obtain current memory policy. 00:04:03.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.943 EAL: Restoring previous memory policy: 4 00:04:03.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.943 EAL: request: mp_malloc_sync 00:04:03.943 EAL: No shared files mode enabled, IPC is disabled 00:04:03.943 EAL: Heap on socket 0 was expanded by 66MB 00:04:03.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.943 EAL: request: mp_malloc_sync 00:04:03.943 EAL: No shared files mode enabled, IPC is disabled 00:04:03.943 EAL: Heap on socket 0 was shrunk by 66MB 00:04:03.943 EAL: Trying to obtain current memory policy. 00:04:03.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.943 EAL: Restoring previous memory policy: 4 00:04:03.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.943 EAL: request: mp_malloc_sync 00:04:03.943 EAL: No shared files mode enabled, IPC is disabled 00:04:03.943 EAL: Heap on socket 0 was expanded by 130MB 00:04:03.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.944 EAL: request: mp_malloc_sync 00:04:03.944 EAL: No shared files mode enabled, IPC is disabled 00:04:03.944 EAL: Heap on socket 0 was shrunk by 130MB 00:04:03.944 EAL: Trying to obtain current memory policy. 00:04:03.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.944 EAL: Restoring previous memory policy: 4 00:04:03.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.944 EAL: request: mp_malloc_sync 00:04:03.944 EAL: No shared files mode enabled, IPC is disabled 00:04:03.944 EAL: Heap on socket 0 was expanded by 258MB 00:04:03.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.201 EAL: request: mp_malloc_sync 00:04:04.201 EAL: No shared files mode enabled, IPC is disabled 00:04:04.201 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.201 EAL: Trying to obtain current memory policy. 
00:04:04.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.201 EAL: Restoring previous memory policy: 4 00:04:04.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.201 EAL: request: mp_malloc_sync 00:04:04.201 EAL: No shared files mode enabled, IPC is disabled 00:04:04.201 EAL: Heap on socket 0 was expanded by 514MB 00:04:04.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.458 EAL: request: mp_malloc_sync 00:04:04.458 EAL: No shared files mode enabled, IPC is disabled 00:04:04.458 EAL: Heap on socket 0 was shrunk by 514MB 00:04:04.458 EAL: Trying to obtain current memory policy. 00:04:04.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.716 EAL: Restoring previous memory policy: 4 00:04:04.716 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.716 EAL: request: mp_malloc_sync 00:04:04.716 EAL: No shared files mode enabled, IPC is disabled 00:04:04.716 EAL: Heap on socket 0 was expanded by 1026MB 00:04:04.974 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.974 passed 00:04:04.974 00:04:04.974 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.975 suites 1 1 n/a 0 0 00:04:04.975 tests 2 2 2 0 0 00:04:04.975 asserts 5358 5358 5358 0 n/a 00:04:04.975 00:04:04.975 Elapsed time = 1.284 seconds 00:04:04.975 EAL: request: mp_malloc_sync 00:04:04.975 EAL: No shared files mode enabled, IPC is disabled 00:04:04.975 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:04.975 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.975 EAL: request: mp_malloc_sync 00:04:04.975 EAL: No shared files mode enabled, IPC is disabled 00:04:04.975 EAL: Heap on socket 0 was shrunk by 2MB 00:04:04.975 EAL: No shared files mode enabled, IPC is disabled 00:04:04.975 EAL: No shared files mode enabled, IPC is disabled 00:04:04.975 EAL: No shared files mode enabled, IPC is disabled 00:04:05.233 00:04:05.233 real 0m1.485s 00:04:05.233 user 0m0.820s 00:04:05.233 sys 0m0.525s 00:04:05.233 10:42:34 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.233 ************************************ 00:04:05.233 END TEST env_vtophys 00:04:05.233 ************************************ 00:04:05.233 10:42:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:05.233 10:42:34 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.233 10:42:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.233 10:42:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.233 10:42:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.233 ************************************ 00:04:05.233 START TEST env_pci 00:04:05.233 ************************************ 00:04:05.233 10:42:34 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.233 00:04:05.233 00:04:05.233 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.233 http://cunit.sourceforge.net/ 00:04:05.233 00:04:05.233 00:04:05.233 Suite: pci 00:04:05.233 Test: pci_hook ...[2024-07-25 10:42:34.785115] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58649 has claimed it 00:04:05.233 passed 00:04:05.233 00:04:05.233 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.233 suites 1 1 n/a 0 0 00:04:05.233 tests 1 1 1 0 0 00:04:05.233 asserts 25 25 25 0 n/a 00:04:05.233 00:04:05.233 Elapsed time = 0.002 seconds 00:04:05.233 EAL: Cannot find 
device (10000:00:01.0) 00:04:05.233 EAL: Failed to attach device on primary process 00:04:05.233 00:04:05.233 real 0m0.022s 00:04:05.233 user 0m0.010s 00:04:05.233 sys 0m0.012s 00:04:05.233 10:42:34 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.233 10:42:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:05.233 ************************************ 00:04:05.233 END TEST env_pci 00:04:05.233 ************************************ 00:04:05.233 10:42:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.233 10:42:34 env -- env/env.sh@15 -- # uname 00:04:05.233 10:42:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:05.233 10:42:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:05.233 10:42:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.233 10:42:34 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:05.233 10:42:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.233 10:42:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.233 ************************************ 00:04:05.233 START TEST env_dpdk_post_init 00:04:05.233 ************************************ 00:04:05.233 10:42:34 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.233 EAL: Detected CPU lcores: 10 00:04:05.233 EAL: Detected NUMA nodes: 1 00:04:05.233 EAL: Detected shared linkage of DPDK 00:04:05.233 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.233 EAL: Selected IOVA mode 'PA' 00:04:05.492 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.492 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:05.492 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:05.492 Starting DPDK initialization... 00:04:05.492 Starting SPDK post initialization... 00:04:05.492 SPDK NVMe probe 00:04:05.492 Attaching to 0000:00:10.0 00:04:05.492 Attaching to 0000:00:11.0 00:04:05.492 Attached to 0000:00:10.0 00:04:05.492 Attached to 0000:00:11.0 00:04:05.492 Cleaning up... 
00:04:05.492 00:04:05.492 real 0m0.187s 00:04:05.492 user 0m0.045s 00:04:05.492 sys 0m0.039s 00:04:05.492 10:42:35 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.492 ************************************ 00:04:05.492 END TEST env_dpdk_post_init 00:04:05.492 10:42:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.492 ************************************ 00:04:05.492 10:42:35 env -- env/env.sh@26 -- # uname 00:04:05.492 10:42:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:05.492 10:42:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.492 10:42:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.492 10:42:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.492 10:42:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.492 ************************************ 00:04:05.492 START TEST env_mem_callbacks 00:04:05.492 ************************************ 00:04:05.492 10:42:35 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.492 EAL: Detected CPU lcores: 10 00:04:05.492 EAL: Detected NUMA nodes: 1 00:04:05.492 EAL: Detected shared linkage of DPDK 00:04:05.492 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.492 EAL: Selected IOVA mode 'PA' 00:04:05.492 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.492 00:04:05.492 00:04:05.492 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.492 http://cunit.sourceforge.net/ 00:04:05.492 00:04:05.492 00:04:05.492 Suite: memory 00:04:05.492 Test: test ... 00:04:05.492 register 0x200000200000 2097152 00:04:05.492 malloc 3145728 00:04:05.750 register 0x200000400000 4194304 00:04:05.750 buf 0x200000500000 len 3145728 PASSED 00:04:05.750 malloc 64 00:04:05.750 buf 0x2000004fff40 len 64 PASSED 00:04:05.750 malloc 4194304 00:04:05.750 register 0x200000800000 6291456 00:04:05.750 buf 0x200000a00000 len 4194304 PASSED 00:04:05.750 free 0x200000500000 3145728 00:04:05.750 free 0x2000004fff40 64 00:04:05.750 unregister 0x200000400000 4194304 PASSED 00:04:05.750 free 0x200000a00000 4194304 00:04:05.750 unregister 0x200000800000 6291456 PASSED 00:04:05.750 malloc 8388608 00:04:05.750 register 0x200000400000 10485760 00:04:05.750 buf 0x200000600000 len 8388608 PASSED 00:04:05.750 free 0x200000600000 8388608 00:04:05.750 unregister 0x200000400000 10485760 PASSED 00:04:05.750 passed 00:04:05.750 00:04:05.750 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.750 suites 1 1 n/a 0 0 00:04:05.750 tests 1 1 1 0 0 00:04:05.750 asserts 15 15 15 0 n/a 00:04:05.750 00:04:05.750 Elapsed time = 0.009 seconds 00:04:05.750 00:04:05.751 real 0m0.147s 00:04:05.751 user 0m0.016s 00:04:05.751 sys 0m0.030s 00:04:05.751 10:42:35 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.751 ************************************ 00:04:05.751 END TEST env_mem_callbacks 00:04:05.751 ************************************ 00:04:05.751 10:42:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:05.751 00:04:05.751 real 0m2.387s 00:04:05.751 user 0m1.170s 00:04:05.751 sys 0m0.845s 00:04:05.751 10:42:35 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.751 10:42:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.751 ************************************ 00:04:05.751 END TEST env 00:04:05.751 
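Every suite in this log, from nvme_mount up to the env tests ending here, is wrapped by the run_test helper from autotest_common.sh, which prints the asterisk banners and the real/user/sys lines. A rough approximation of what that wrapper does (sketch only; the actual helper also validates its arguments and toggles xtrace, as the @1101 and @1107 traces show):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # the test body; 'time' yields the real/user/sys lines
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }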
************************************ 00:04:05.751 10:42:35 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.751 10:42:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.751 10:42:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.751 10:42:35 -- common/autotest_common.sh@10 -- # set +x 00:04:05.751 ************************************ 00:04:05.751 START TEST rpc 00:04:05.751 ************************************ 00:04:05.751 10:42:35 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.751 * Looking for test storage... 00:04:05.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.751 10:42:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58764 00:04:05.751 10:42:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.751 10:42:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:05.751 10:42:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58764 00:04:05.751 10:42:35 rpc -- common/autotest_common.sh@831 -- # '[' -z 58764 ']' 00:04:05.751 10:42:35 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.751 10:42:35 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:05.751 10:42:35 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.751 10:42:35 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:05.751 10:42:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.751 [2024-07-25 10:42:35.487613] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:05.751 [2024-07-25 10:42:35.487705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58764 ] 00:04:06.010 [2024-07-25 10:42:35.623761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.010 [2024-07-25 10:42:35.725563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:06.010 [2024-07-25 10:42:35.725643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58764' to capture a snapshot of events at runtime. 00:04:06.010 [2024-07-25 10:42:35.725654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:06.010 [2024-07-25 10:42:35.725662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:06.010 [2024-07-25 10:42:35.725669] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58764 for offline analysis/debug. 
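The app_setup_trace notices above are printed because spdk_tgt was started with '-e bdev', so the bdev tracepoint group stays live for the whole rpc suite. A short sketch of acting on that hint while pid 58764 is still running; the build/bin location of spdk_trace is an assumption about where the harness put the binary:

    # Decode a snapshot of the live bdev tracepoints (command quoted from the notice above)
    ./build/bin/spdk_trace -s spdk_tgt -p 58764 > /tmp/bdev_trace.txt
    # ...or keep the raw shared-memory trace file for offline decoding after the target exits
    cp /dev/shm/spdk_tgt_trace.pid58764 /tmp/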
00:04:06.010 [2024-07-25 10:42:35.725700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.269 [2024-07-25 10:42:35.782640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:06.836 10:42:36 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:06.836 10:42:36 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:06.837 10:42:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.837 10:42:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.837 10:42:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:06.837 10:42:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:06.837 10:42:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.837 10:42:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.837 10:42:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.837 ************************************ 00:04:06.837 START TEST rpc_integrity 00:04:06.837 ************************************ 00:04:06.837 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:06.837 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.837 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.837 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.837 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.837 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.837 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.837 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.837 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.837 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.837 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.837 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.837 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:06.837 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.837 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.837 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.837 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.837 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.837 { 00:04:06.837 "name": "Malloc0", 00:04:06.837 "aliases": [ 00:04:06.837 "769edc1e-c32a-4ebb-b93e-8beb43eb67ce" 00:04:06.837 ], 00:04:06.837 "product_name": "Malloc disk", 00:04:06.837 "block_size": 512, 00:04:06.837 "num_blocks": 16384, 00:04:06.837 "uuid": "769edc1e-c32a-4ebb-b93e-8beb43eb67ce", 00:04:06.837 "assigned_rate_limits": { 00:04:06.837 "rw_ios_per_sec": 0, 00:04:06.837 "rw_mbytes_per_sec": 0, 00:04:06.837 "r_mbytes_per_sec": 0, 00:04:06.837 "w_mbytes_per_sec": 0 00:04:06.837 }, 00:04:06.837 "claimed": false, 00:04:06.837 "zoned": false, 00:04:06.837 
"supported_io_types": { 00:04:06.837 "read": true, 00:04:06.837 "write": true, 00:04:06.837 "unmap": true, 00:04:06.837 "flush": true, 00:04:06.837 "reset": true, 00:04:06.837 "nvme_admin": false, 00:04:06.837 "nvme_io": false, 00:04:06.837 "nvme_io_md": false, 00:04:06.837 "write_zeroes": true, 00:04:06.837 "zcopy": true, 00:04:06.837 "get_zone_info": false, 00:04:06.837 "zone_management": false, 00:04:06.837 "zone_append": false, 00:04:06.837 "compare": false, 00:04:06.837 "compare_and_write": false, 00:04:06.837 "abort": true, 00:04:06.837 "seek_hole": false, 00:04:06.837 "seek_data": false, 00:04:06.837 "copy": true, 00:04:06.837 "nvme_iov_md": false 00:04:06.837 }, 00:04:06.837 "memory_domains": [ 00:04:06.837 { 00:04:06.837 "dma_device_id": "system", 00:04:06.837 "dma_device_type": 1 00:04:06.837 }, 00:04:06.837 { 00:04:06.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.837 "dma_device_type": 2 00:04:06.837 } 00:04:06.837 ], 00:04:06.837 "driver_specific": {} 00:04:06.837 } 00:04:06.837 ]' 00:04:06.837 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.096 [2024-07-25 10:42:36.604744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:07.096 [2024-07-25 10:42:36.604817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.096 [2024-07-25 10:42:36.604842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1aa3da0 00:04:07.096 [2024-07-25 10:42:36.604864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.096 [2024-07-25 10:42:36.606810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.096 [2024-07-25 10:42:36.606872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.096 Passthru0 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.096 { 00:04:07.096 "name": "Malloc0", 00:04:07.096 "aliases": [ 00:04:07.096 "769edc1e-c32a-4ebb-b93e-8beb43eb67ce" 00:04:07.096 ], 00:04:07.096 "product_name": "Malloc disk", 00:04:07.096 "block_size": 512, 00:04:07.096 "num_blocks": 16384, 00:04:07.096 "uuid": "769edc1e-c32a-4ebb-b93e-8beb43eb67ce", 00:04:07.096 "assigned_rate_limits": { 00:04:07.096 "rw_ios_per_sec": 0, 00:04:07.096 "rw_mbytes_per_sec": 0, 00:04:07.096 "r_mbytes_per_sec": 0, 00:04:07.096 "w_mbytes_per_sec": 0 00:04:07.096 }, 00:04:07.096 "claimed": true, 00:04:07.096 "claim_type": "exclusive_write", 00:04:07.096 "zoned": false, 00:04:07.096 "supported_io_types": { 00:04:07.096 "read": true, 00:04:07.096 "write": true, 00:04:07.096 "unmap": true, 00:04:07.096 "flush": true, 00:04:07.096 "reset": true, 00:04:07.096 "nvme_admin": false, 
00:04:07.096 "nvme_io": false, 00:04:07.096 "nvme_io_md": false, 00:04:07.096 "write_zeroes": true, 00:04:07.096 "zcopy": true, 00:04:07.096 "get_zone_info": false, 00:04:07.096 "zone_management": false, 00:04:07.096 "zone_append": false, 00:04:07.096 "compare": false, 00:04:07.096 "compare_and_write": false, 00:04:07.096 "abort": true, 00:04:07.096 "seek_hole": false, 00:04:07.096 "seek_data": false, 00:04:07.096 "copy": true, 00:04:07.096 "nvme_iov_md": false 00:04:07.096 }, 00:04:07.096 "memory_domains": [ 00:04:07.096 { 00:04:07.096 "dma_device_id": "system", 00:04:07.096 "dma_device_type": 1 00:04:07.096 }, 00:04:07.096 { 00:04:07.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.096 "dma_device_type": 2 00:04:07.096 } 00:04:07.096 ], 00:04:07.096 "driver_specific": {} 00:04:07.096 }, 00:04:07.096 { 00:04:07.096 "name": "Passthru0", 00:04:07.096 "aliases": [ 00:04:07.096 "dc37c586-cc3b-5bb9-af03-a144182fada2" 00:04:07.096 ], 00:04:07.096 "product_name": "passthru", 00:04:07.096 "block_size": 512, 00:04:07.096 "num_blocks": 16384, 00:04:07.096 "uuid": "dc37c586-cc3b-5bb9-af03-a144182fada2", 00:04:07.096 "assigned_rate_limits": { 00:04:07.096 "rw_ios_per_sec": 0, 00:04:07.096 "rw_mbytes_per_sec": 0, 00:04:07.096 "r_mbytes_per_sec": 0, 00:04:07.096 "w_mbytes_per_sec": 0 00:04:07.096 }, 00:04:07.096 "claimed": false, 00:04:07.096 "zoned": false, 00:04:07.096 "supported_io_types": { 00:04:07.096 "read": true, 00:04:07.096 "write": true, 00:04:07.096 "unmap": true, 00:04:07.096 "flush": true, 00:04:07.096 "reset": true, 00:04:07.096 "nvme_admin": false, 00:04:07.096 "nvme_io": false, 00:04:07.096 "nvme_io_md": false, 00:04:07.096 "write_zeroes": true, 00:04:07.096 "zcopy": true, 00:04:07.096 "get_zone_info": false, 00:04:07.096 "zone_management": false, 00:04:07.096 "zone_append": false, 00:04:07.096 "compare": false, 00:04:07.096 "compare_and_write": false, 00:04:07.096 "abort": true, 00:04:07.096 "seek_hole": false, 00:04:07.096 "seek_data": false, 00:04:07.096 "copy": true, 00:04:07.096 "nvme_iov_md": false 00:04:07.096 }, 00:04:07.096 "memory_domains": [ 00:04:07.096 { 00:04:07.096 "dma_device_id": "system", 00:04:07.096 "dma_device_type": 1 00:04:07.096 }, 00:04:07.096 { 00:04:07.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.096 "dma_device_type": 2 00:04:07.096 } 00:04:07.096 ], 00:04:07.096 "driver_specific": { 00:04:07.096 "passthru": { 00:04:07.096 "name": "Passthru0", 00:04:07.096 "base_bdev_name": "Malloc0" 00:04:07.096 } 00:04:07.096 } 00:04:07.096 } 00:04:07.096 ]' 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.096 10:42:36 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.096 10:42:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.096 00:04:07.096 real 0m0.319s 00:04:07.096 user 0m0.214s 00:04:07.096 sys 0m0.034s 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.096 ************************************ 00:04:07.096 END TEST rpc_integrity 00:04:07.096 10:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.096 ************************************ 00:04:07.096 10:42:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:07.096 10:42:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.096 10:42:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.096 10:42:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.096 ************************************ 00:04:07.096 START TEST rpc_plugins 00:04:07.096 ************************************ 00:04:07.096 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:07.096 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:07.096 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.096 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.355 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.355 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:07.355 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:07.355 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.355 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.355 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.355 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:07.355 { 00:04:07.355 "name": "Malloc1", 00:04:07.355 "aliases": [ 00:04:07.355 "085baac1-72ee-4eb0-b056-2c03415b90bd" 00:04:07.355 ], 00:04:07.355 "product_name": "Malloc disk", 00:04:07.355 "block_size": 4096, 00:04:07.355 "num_blocks": 256, 00:04:07.355 "uuid": "085baac1-72ee-4eb0-b056-2c03415b90bd", 00:04:07.355 "assigned_rate_limits": { 00:04:07.355 "rw_ios_per_sec": 0, 00:04:07.355 "rw_mbytes_per_sec": 0, 00:04:07.355 "r_mbytes_per_sec": 0, 00:04:07.355 "w_mbytes_per_sec": 0 00:04:07.355 }, 00:04:07.356 "claimed": false, 00:04:07.356 "zoned": false, 00:04:07.356 "supported_io_types": { 00:04:07.356 "read": true, 00:04:07.356 "write": true, 00:04:07.356 "unmap": true, 00:04:07.356 "flush": true, 00:04:07.356 "reset": true, 00:04:07.356 "nvme_admin": false, 00:04:07.356 "nvme_io": false, 00:04:07.356 "nvme_io_md": false, 00:04:07.356 "write_zeroes": true, 00:04:07.356 "zcopy": true, 00:04:07.356 "get_zone_info": false, 00:04:07.356 "zone_management": false, 00:04:07.356 "zone_append": false, 00:04:07.356 "compare": false, 00:04:07.356 "compare_and_write": false, 00:04:07.356 "abort": true, 00:04:07.356 "seek_hole": false, 00:04:07.356 "seek_data": false, 00:04:07.356 "copy": true, 00:04:07.356 "nvme_iov_md": false 00:04:07.356 }, 00:04:07.356 "memory_domains": [ 00:04:07.356 { 
00:04:07.356 "dma_device_id": "system", 00:04:07.356 "dma_device_type": 1 00:04:07.356 }, 00:04:07.356 { 00:04:07.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.356 "dma_device_type": 2 00:04:07.356 } 00:04:07.356 ], 00:04:07.356 "driver_specific": {} 00:04:07.356 } 00:04:07.356 ]' 00:04:07.356 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:07.356 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:07.356 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:07.356 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.356 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.356 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.356 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:07.356 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.356 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.356 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.356 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:07.356 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:07.356 ************************************ 00:04:07.356 END TEST rpc_plugins 00:04:07.356 ************************************ 00:04:07.356 10:42:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:07.356 00:04:07.356 real 0m0.158s 00:04:07.356 user 0m0.105s 00:04:07.356 sys 0m0.019s 00:04:07.356 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.356 10:42:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.356 10:42:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:07.356 10:42:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.356 10:42:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.356 10:42:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.356 ************************************ 00:04:07.356 START TEST rpc_trace_cmd_test 00:04:07.356 ************************************ 00:04:07.356 10:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:07.356 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:07.356 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:07.356 10:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.356 10:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.356 10:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.356 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:07.356 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58764", 00:04:07.356 "tpoint_group_mask": "0x8", 00:04:07.356 "iscsi_conn": { 00:04:07.356 "mask": "0x2", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "scsi": { 00:04:07.356 "mask": "0x4", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "bdev": { 00:04:07.356 "mask": "0x8", 00:04:07.356 "tpoint_mask": "0xffffffffffffffff" 00:04:07.356 }, 00:04:07.356 "nvmf_rdma": { 00:04:07.356 "mask": "0x10", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "nvmf_tcp": { 00:04:07.356 "mask": "0x20", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "ftl": { 00:04:07.356 
"mask": "0x40", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "blobfs": { 00:04:07.356 "mask": "0x80", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "dsa": { 00:04:07.356 "mask": "0x200", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "thread": { 00:04:07.356 "mask": "0x400", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "nvme_pcie": { 00:04:07.356 "mask": "0x800", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "iaa": { 00:04:07.356 "mask": "0x1000", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "nvme_tcp": { 00:04:07.356 "mask": "0x2000", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "bdev_nvme": { 00:04:07.356 "mask": "0x4000", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 }, 00:04:07.356 "sock": { 00:04:07.356 "mask": "0x8000", 00:04:07.356 "tpoint_mask": "0x0" 00:04:07.356 } 00:04:07.356 }' 00:04:07.356 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:07.614 ************************************ 00:04:07.614 END TEST rpc_trace_cmd_test 00:04:07.614 ************************************ 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:07.614 00:04:07.614 real 0m0.258s 00:04:07.614 user 0m0.222s 00:04:07.614 sys 0m0.024s 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.614 10:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.614 10:42:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:07.614 10:42:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:07.614 10:42:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:07.614 10:42:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.614 10:42:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.614 10:42:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.614 ************************************ 00:04:07.614 START TEST rpc_daemon_integrity 00:04:07.614 ************************************ 00:04:07.614 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:07.614 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.614 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.614 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.872 { 00:04:07.872 "name": "Malloc2", 00:04:07.872 "aliases": [ 00:04:07.872 "eadb7d38-2ead-4d29-9c67-7fb64c4038eb" 00:04:07.872 ], 00:04:07.872 "product_name": "Malloc disk", 00:04:07.872 "block_size": 512, 00:04:07.872 "num_blocks": 16384, 00:04:07.872 "uuid": "eadb7d38-2ead-4d29-9c67-7fb64c4038eb", 00:04:07.872 "assigned_rate_limits": { 00:04:07.872 "rw_ios_per_sec": 0, 00:04:07.872 "rw_mbytes_per_sec": 0, 00:04:07.872 "r_mbytes_per_sec": 0, 00:04:07.872 "w_mbytes_per_sec": 0 00:04:07.872 }, 00:04:07.872 "claimed": false, 00:04:07.872 "zoned": false, 00:04:07.872 "supported_io_types": { 00:04:07.872 "read": true, 00:04:07.872 "write": true, 00:04:07.872 "unmap": true, 00:04:07.872 "flush": true, 00:04:07.872 "reset": true, 00:04:07.872 "nvme_admin": false, 00:04:07.872 "nvme_io": false, 00:04:07.872 "nvme_io_md": false, 00:04:07.872 "write_zeroes": true, 00:04:07.872 "zcopy": true, 00:04:07.872 "get_zone_info": false, 00:04:07.872 "zone_management": false, 00:04:07.872 "zone_append": false, 00:04:07.872 "compare": false, 00:04:07.872 "compare_and_write": false, 00:04:07.872 "abort": true, 00:04:07.872 "seek_hole": false, 00:04:07.872 "seek_data": false, 00:04:07.872 "copy": true, 00:04:07.872 "nvme_iov_md": false 00:04:07.872 }, 00:04:07.872 "memory_domains": [ 00:04:07.872 { 00:04:07.872 "dma_device_id": "system", 00:04:07.872 "dma_device_type": 1 00:04:07.872 }, 00:04:07.872 { 00:04:07.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.872 "dma_device_type": 2 00:04:07.872 } 00:04:07.872 ], 00:04:07.872 "driver_specific": {} 00:04:07.872 } 00:04:07.872 ]' 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.872 [2024-07-25 10:42:37.489699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:07.872 [2024-07-25 10:42:37.489768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.872 [2024-07-25 10:42:37.489792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b08be0 00:04:07.872 [2024-07-25 10:42:37.489802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.872 [2024-07-25 10:42:37.491480] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.872 [2024-07-25 10:42:37.491517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.872 Passthru0 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.872 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.872 { 00:04:07.872 "name": "Malloc2", 00:04:07.872 "aliases": [ 00:04:07.872 "eadb7d38-2ead-4d29-9c67-7fb64c4038eb" 00:04:07.872 ], 00:04:07.872 "product_name": "Malloc disk", 00:04:07.872 "block_size": 512, 00:04:07.872 "num_blocks": 16384, 00:04:07.872 "uuid": "eadb7d38-2ead-4d29-9c67-7fb64c4038eb", 00:04:07.872 "assigned_rate_limits": { 00:04:07.872 "rw_ios_per_sec": 0, 00:04:07.872 "rw_mbytes_per_sec": 0, 00:04:07.872 "r_mbytes_per_sec": 0, 00:04:07.872 "w_mbytes_per_sec": 0 00:04:07.872 }, 00:04:07.872 "claimed": true, 00:04:07.872 "claim_type": "exclusive_write", 00:04:07.872 "zoned": false, 00:04:07.872 "supported_io_types": { 00:04:07.873 "read": true, 00:04:07.873 "write": true, 00:04:07.873 "unmap": true, 00:04:07.873 "flush": true, 00:04:07.873 "reset": true, 00:04:07.873 "nvme_admin": false, 00:04:07.873 "nvme_io": false, 00:04:07.873 "nvme_io_md": false, 00:04:07.873 "write_zeroes": true, 00:04:07.873 "zcopy": true, 00:04:07.873 "get_zone_info": false, 00:04:07.873 "zone_management": false, 00:04:07.873 "zone_append": false, 00:04:07.873 "compare": false, 00:04:07.873 "compare_and_write": false, 00:04:07.873 "abort": true, 00:04:07.873 "seek_hole": false, 00:04:07.873 "seek_data": false, 00:04:07.873 "copy": true, 00:04:07.873 "nvme_iov_md": false 00:04:07.873 }, 00:04:07.873 "memory_domains": [ 00:04:07.873 { 00:04:07.873 "dma_device_id": "system", 00:04:07.873 "dma_device_type": 1 00:04:07.873 }, 00:04:07.873 { 00:04:07.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.873 "dma_device_type": 2 00:04:07.873 } 00:04:07.873 ], 00:04:07.873 "driver_specific": {} 00:04:07.873 }, 00:04:07.873 { 00:04:07.873 "name": "Passthru0", 00:04:07.873 "aliases": [ 00:04:07.873 "ef11d9e3-8fee-5909-a062-84cbcedb4a12" 00:04:07.873 ], 00:04:07.873 "product_name": "passthru", 00:04:07.873 "block_size": 512, 00:04:07.873 "num_blocks": 16384, 00:04:07.873 "uuid": "ef11d9e3-8fee-5909-a062-84cbcedb4a12", 00:04:07.873 "assigned_rate_limits": { 00:04:07.873 "rw_ios_per_sec": 0, 00:04:07.873 "rw_mbytes_per_sec": 0, 00:04:07.873 "r_mbytes_per_sec": 0, 00:04:07.873 "w_mbytes_per_sec": 0 00:04:07.873 }, 00:04:07.873 "claimed": false, 00:04:07.873 "zoned": false, 00:04:07.873 "supported_io_types": { 00:04:07.873 "read": true, 00:04:07.873 "write": true, 00:04:07.873 "unmap": true, 00:04:07.873 "flush": true, 00:04:07.873 "reset": true, 00:04:07.873 "nvme_admin": false, 00:04:07.873 "nvme_io": false, 00:04:07.873 "nvme_io_md": false, 00:04:07.873 "write_zeroes": true, 00:04:07.873 "zcopy": true, 00:04:07.873 "get_zone_info": false, 00:04:07.873 "zone_management": false, 00:04:07.873 "zone_append": false, 00:04:07.873 "compare": false, 00:04:07.873 "compare_and_write": false, 00:04:07.873 "abort": true, 00:04:07.873 "seek_hole": false, 
00:04:07.873 "seek_data": false, 00:04:07.873 "copy": true, 00:04:07.873 "nvme_iov_md": false 00:04:07.873 }, 00:04:07.873 "memory_domains": [ 00:04:07.873 { 00:04:07.873 "dma_device_id": "system", 00:04:07.873 "dma_device_type": 1 00:04:07.873 }, 00:04:07.873 { 00:04:07.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.873 "dma_device_type": 2 00:04:07.873 } 00:04:07.873 ], 00:04:07.873 "driver_specific": { 00:04:07.873 "passthru": { 00:04:07.873 "name": "Passthru0", 00:04:07.873 "base_bdev_name": "Malloc2" 00:04:07.873 } 00:04:07.873 } 00:04:07.873 } 00:04:07.873 ]' 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.873 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.131 ************************************ 00:04:08.131 END TEST rpc_daemon_integrity 00:04:08.131 ************************************ 00:04:08.131 10:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.131 00:04:08.131 real 0m0.314s 00:04:08.131 user 0m0.206s 00:04:08.131 sys 0m0.046s 00:04:08.131 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.131 10:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.131 10:42:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:08.131 10:42:37 rpc -- rpc/rpc.sh@84 -- # killprocess 58764 00:04:08.131 10:42:37 rpc -- common/autotest_common.sh@950 -- # '[' -z 58764 ']' 00:04:08.131 10:42:37 rpc -- common/autotest_common.sh@954 -- # kill -0 58764 00:04:08.131 10:42:37 rpc -- common/autotest_common.sh@955 -- # uname 00:04:08.131 10:42:37 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:08.131 10:42:37 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58764 00:04:08.131 killing process with pid 58764 00:04:08.131 10:42:37 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:08.131 10:42:37 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:08.131 10:42:37 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58764' 00:04:08.131 10:42:37 rpc -- common/autotest_common.sh@969 -- # kill 58764 00:04:08.131 10:42:37 
rpc -- common/autotest_common.sh@974 -- # wait 58764 00:04:08.389 ************************************ 00:04:08.389 END TEST rpc 00:04:08.389 ************************************ 00:04:08.389 00:04:08.389 real 0m2.787s 00:04:08.389 user 0m3.586s 00:04:08.389 sys 0m0.686s 00:04:08.389 10:42:38 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.389 10:42:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.648 10:42:38 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:08.648 10:42:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.648 10:42:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.648 10:42:38 -- common/autotest_common.sh@10 -- # set +x 00:04:08.648 ************************************ 00:04:08.648 START TEST skip_rpc 00:04:08.648 ************************************ 00:04:08.648 10:42:38 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:08.648 * Looking for test storage... 00:04:08.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.648 10:42:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:08.648 10:42:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:08.648 10:42:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:08.648 10:42:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.648 10:42:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.648 10:42:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.648 ************************************ 00:04:08.648 START TEST skip_rpc 00:04:08.648 ************************************ 00:04:08.648 10:42:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:08.648 10:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58962 00:04:08.648 10:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:08.648 10:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.648 10:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:08.648 [2024-07-25 10:42:38.357016] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
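The skip_rpc case starting here launches spdk_tgt with --no-rpc-server and then checks that an ordinary RPC (spdk_get_version) fails. A minimal sketch of the same negative check, assuming a built tree and the default socket path:

    # With --no-rpc-server nothing listens on /var/tmp/spdk.sock, so the RPC below must fail
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 & tgt=$!
    sleep 5                                   # the test sleeps rather than waiting on a socket
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC server answered" >&2
    fi
    kill "$tgt"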
00:04:08.648 [2024-07-25 10:42:38.357183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58962 ] 00:04:08.906 [2024-07-25 10:42:38.504470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.906 [2024-07-25 10:42:38.619980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.164 [2024-07-25 10:42:38.677307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58962 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58962 ']' 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58962 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58962 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:14.477 killing process with pid 58962 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58962' 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58962 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58962 00:04:14.477 00:04:14.477 real 0m5.575s 00:04:14.477 user 0m5.173s 00:04:14.477 sys 0m0.305s 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.477 10:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:04:14.477 ************************************ 00:04:14.477 END TEST skip_rpc 00:04:14.477 ************************************ 00:04:14.477 10:42:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:14.477 10:42:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.477 10:42:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.477 10:42:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.477 ************************************ 00:04:14.477 START TEST skip_rpc_with_json 00:04:14.477 ************************************ 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59043 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59043 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59043 ']' 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:14.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:14.477 10:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.477 [2024-07-25 10:42:43.971431] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
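skip_rpc_with_json, starting here as pid 59043, builds a configuration at runtime and then proves that a cold start from the saved JSON reproduces it. A compressed sketch of that flow, assuming the same paths the test uses under test/rpc/:

    # 1) against the live, RPC-enabled target: create the TCP transport and dump the config
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > test/rpc/config.json
    # 2) stop that target, then cold-start a new one from the file and look for the transport
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt && echo "transport restored from config.json"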
00:04:14.477 [2024-07-25 10:42:43.971604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59043 ] 00:04:14.477 [2024-07-25 10:42:44.114851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.736 [2024-07-25 10:42:44.263530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.736 [2024-07-25 10:42:44.338438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.303 [2024-07-25 10:42:44.938048] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:15.303 request: 00:04:15.303 { 00:04:15.303 "trtype": "tcp", 00:04:15.303 "method": "nvmf_get_transports", 00:04:15.303 "req_id": 1 00:04:15.303 } 00:04:15.303 Got JSON-RPC error response 00:04:15.303 response: 00:04:15.303 { 00:04:15.303 "code": -19, 00:04:15.303 "message": "No such device" 00:04:15.303 } 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.303 [2024-07-25 10:42:44.950160] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.303 10:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.563 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.563 10:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:15.563 { 00:04:15.563 "subsystems": [ 00:04:15.563 { 00:04:15.563 "subsystem": "keyring", 00:04:15.563 "config": [] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "iobuf", 00:04:15.563 "config": [ 00:04:15.563 { 00:04:15.563 "method": "iobuf_set_options", 00:04:15.563 "params": { 00:04:15.563 "small_pool_count": 8192, 00:04:15.563 "large_pool_count": 1024, 00:04:15.563 "small_bufsize": 8192, 00:04:15.563 "large_bufsize": 135168 00:04:15.563 } 00:04:15.563 } 00:04:15.563 ] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "sock", 00:04:15.563 "config": [ 00:04:15.563 { 00:04:15.563 "method": "sock_set_default_impl", 00:04:15.563 "params": { 00:04:15.563 "impl_name": "uring" 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "sock_impl_set_options", 
00:04:15.563 "params": { 00:04:15.563 "impl_name": "ssl", 00:04:15.563 "recv_buf_size": 4096, 00:04:15.563 "send_buf_size": 4096, 00:04:15.563 "enable_recv_pipe": true, 00:04:15.563 "enable_quickack": false, 00:04:15.563 "enable_placement_id": 0, 00:04:15.563 "enable_zerocopy_send_server": true, 00:04:15.563 "enable_zerocopy_send_client": false, 00:04:15.563 "zerocopy_threshold": 0, 00:04:15.563 "tls_version": 0, 00:04:15.563 "enable_ktls": false 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "sock_impl_set_options", 00:04:15.563 "params": { 00:04:15.563 "impl_name": "posix", 00:04:15.563 "recv_buf_size": 2097152, 00:04:15.563 "send_buf_size": 2097152, 00:04:15.563 "enable_recv_pipe": true, 00:04:15.563 "enable_quickack": false, 00:04:15.563 "enable_placement_id": 0, 00:04:15.563 "enable_zerocopy_send_server": true, 00:04:15.563 "enable_zerocopy_send_client": false, 00:04:15.563 "zerocopy_threshold": 0, 00:04:15.563 "tls_version": 0, 00:04:15.563 "enable_ktls": false 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "sock_impl_set_options", 00:04:15.563 "params": { 00:04:15.563 "impl_name": "uring", 00:04:15.563 "recv_buf_size": 2097152, 00:04:15.563 "send_buf_size": 2097152, 00:04:15.563 "enable_recv_pipe": true, 00:04:15.563 "enable_quickack": false, 00:04:15.563 "enable_placement_id": 0, 00:04:15.563 "enable_zerocopy_send_server": false, 00:04:15.563 "enable_zerocopy_send_client": false, 00:04:15.563 "zerocopy_threshold": 0, 00:04:15.563 "tls_version": 0, 00:04:15.563 "enable_ktls": false 00:04:15.563 } 00:04:15.563 } 00:04:15.563 ] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "vmd", 00:04:15.563 "config": [] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "accel", 00:04:15.563 "config": [ 00:04:15.563 { 00:04:15.563 "method": "accel_set_options", 00:04:15.563 "params": { 00:04:15.563 "small_cache_size": 128, 00:04:15.563 "large_cache_size": 16, 00:04:15.563 "task_count": 2048, 00:04:15.563 "sequence_count": 2048, 00:04:15.563 "buf_count": 2048 00:04:15.563 } 00:04:15.563 } 00:04:15.563 ] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "bdev", 00:04:15.563 "config": [ 00:04:15.563 { 00:04:15.563 "method": "bdev_set_options", 00:04:15.563 "params": { 00:04:15.563 "bdev_io_pool_size": 65535, 00:04:15.563 "bdev_io_cache_size": 256, 00:04:15.563 "bdev_auto_examine": true, 00:04:15.563 "iobuf_small_cache_size": 128, 00:04:15.563 "iobuf_large_cache_size": 16 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "bdev_raid_set_options", 00:04:15.563 "params": { 00:04:15.563 "process_window_size_kb": 1024, 00:04:15.563 "process_max_bandwidth_mb_sec": 0 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "bdev_iscsi_set_options", 00:04:15.563 "params": { 00:04:15.563 "timeout_sec": 30 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "bdev_nvme_set_options", 00:04:15.563 "params": { 00:04:15.563 "action_on_timeout": "none", 00:04:15.563 "timeout_us": 0, 00:04:15.563 "timeout_admin_us": 0, 00:04:15.563 "keep_alive_timeout_ms": 10000, 00:04:15.563 "arbitration_burst": 0, 00:04:15.563 "low_priority_weight": 0, 00:04:15.563 "medium_priority_weight": 0, 00:04:15.563 "high_priority_weight": 0, 00:04:15.563 "nvme_adminq_poll_period_us": 10000, 00:04:15.563 "nvme_ioq_poll_period_us": 0, 00:04:15.563 "io_queue_requests": 0, 00:04:15.563 "delay_cmd_submit": true, 00:04:15.563 "transport_retry_count": 4, 00:04:15.563 "bdev_retry_count": 3, 00:04:15.563 "transport_ack_timeout": 0, 
00:04:15.563 "ctrlr_loss_timeout_sec": 0, 00:04:15.563 "reconnect_delay_sec": 0, 00:04:15.563 "fast_io_fail_timeout_sec": 0, 00:04:15.563 "disable_auto_failback": false, 00:04:15.563 "generate_uuids": false, 00:04:15.563 "transport_tos": 0, 00:04:15.563 "nvme_error_stat": false, 00:04:15.563 "rdma_srq_size": 0, 00:04:15.563 "io_path_stat": false, 00:04:15.563 "allow_accel_sequence": false, 00:04:15.563 "rdma_max_cq_size": 0, 00:04:15.563 "rdma_cm_event_timeout_ms": 0, 00:04:15.563 "dhchap_digests": [ 00:04:15.563 "sha256", 00:04:15.563 "sha384", 00:04:15.563 "sha512" 00:04:15.563 ], 00:04:15.563 "dhchap_dhgroups": [ 00:04:15.563 "null", 00:04:15.563 "ffdhe2048", 00:04:15.563 "ffdhe3072", 00:04:15.563 "ffdhe4096", 00:04:15.563 "ffdhe6144", 00:04:15.563 "ffdhe8192" 00:04:15.563 ] 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "bdev_nvme_set_hotplug", 00:04:15.563 "params": { 00:04:15.563 "period_us": 100000, 00:04:15.563 "enable": false 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "bdev_wait_for_examine" 00:04:15.563 } 00:04:15.563 ] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "scsi", 00:04:15.563 "config": null 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "scheduler", 00:04:15.563 "config": [ 00:04:15.563 { 00:04:15.563 "method": "framework_set_scheduler", 00:04:15.563 "params": { 00:04:15.563 "name": "static" 00:04:15.563 } 00:04:15.563 } 00:04:15.563 ] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "vhost_scsi", 00:04:15.563 "config": [] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "vhost_blk", 00:04:15.563 "config": [] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "ublk", 00:04:15.563 "config": [] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "nbd", 00:04:15.563 "config": [] 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "subsystem": "nvmf", 00:04:15.563 "config": [ 00:04:15.563 { 00:04:15.563 "method": "nvmf_set_config", 00:04:15.563 "params": { 00:04:15.563 "discovery_filter": "match_any", 00:04:15.563 "admin_cmd_passthru": { 00:04:15.563 "identify_ctrlr": false 00:04:15.563 } 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "nvmf_set_max_subsystems", 00:04:15.563 "params": { 00:04:15.563 "max_subsystems": 1024 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "nvmf_set_crdt", 00:04:15.563 "params": { 00:04:15.563 "crdt1": 0, 00:04:15.563 "crdt2": 0, 00:04:15.563 "crdt3": 0 00:04:15.563 } 00:04:15.563 }, 00:04:15.563 { 00:04:15.563 "method": "nvmf_create_transport", 00:04:15.563 "params": { 00:04:15.563 "trtype": "TCP", 00:04:15.563 "max_queue_depth": 128, 00:04:15.563 "max_io_qpairs_per_ctrlr": 127, 00:04:15.563 "in_capsule_data_size": 4096, 00:04:15.563 "max_io_size": 131072, 00:04:15.563 "io_unit_size": 131072, 00:04:15.563 "max_aq_depth": 128, 00:04:15.563 "num_shared_buffers": 511, 00:04:15.563 "buf_cache_size": 4294967295, 00:04:15.563 "dif_insert_or_strip": false, 00:04:15.563 "zcopy": false, 00:04:15.563 "c2h_success": true, 00:04:15.564 "sock_priority": 0, 00:04:15.564 "abort_timeout_sec": 1, 00:04:15.564 "ack_timeout": 0, 00:04:15.564 "data_wr_pool_size": 0 00:04:15.564 } 00:04:15.564 } 00:04:15.564 ] 00:04:15.564 }, 00:04:15.564 { 00:04:15.564 "subsystem": "iscsi", 00:04:15.564 "config": [ 00:04:15.564 { 00:04:15.564 "method": "iscsi_set_options", 00:04:15.564 "params": { 00:04:15.564 "node_base": "iqn.2016-06.io.spdk", 00:04:15.564 "max_sessions": 128, 00:04:15.564 "max_connections_per_session": 2, 00:04:15.564 
"max_queue_depth": 64, 00:04:15.564 "default_time2wait": 2, 00:04:15.564 "default_time2retain": 20, 00:04:15.564 "first_burst_length": 8192, 00:04:15.564 "immediate_data": true, 00:04:15.564 "allow_duplicated_isid": false, 00:04:15.564 "error_recovery_level": 0, 00:04:15.564 "nop_timeout": 60, 00:04:15.564 "nop_in_interval": 30, 00:04:15.564 "disable_chap": false, 00:04:15.564 "require_chap": false, 00:04:15.564 "mutual_chap": false, 00:04:15.564 "chap_group": 0, 00:04:15.564 "max_large_datain_per_connection": 64, 00:04:15.564 "max_r2t_per_connection": 4, 00:04:15.564 "pdu_pool_size": 36864, 00:04:15.564 "immediate_data_pool_size": 16384, 00:04:15.564 "data_out_pool_size": 2048 00:04:15.564 } 00:04:15.564 } 00:04:15.564 ] 00:04:15.564 } 00:04:15.564 ] 00:04:15.564 } 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59043 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59043 ']' 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59043 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59043 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59043' 00:04:15.564 killing process with pid 59043 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59043 00:04:15.564 10:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59043 00:04:16.131 10:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59076 00:04:16.131 10:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.131 10:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59076 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59076 ']' 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59076 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59076 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:21.399 killing process with pid 59076 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59076' 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@969 -- # kill 59076 00:04:21.399 10:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59076 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.658 00:04:21.658 real 0m7.389s 00:04:21.658 user 0m6.923s 00:04:21.658 sys 0m0.853s 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.658 ************************************ 00:04:21.658 END TEST skip_rpc_with_json 00:04:21.658 ************************************ 00:04:21.658 10:42:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:21.658 10:42:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.658 10:42:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.658 10:42:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.658 ************************************ 00:04:21.658 START TEST skip_rpc_with_delay 00:04:21.658 ************************************ 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:21.658 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.920 [2024-07-25 10:42:51.398515] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
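For reference, the skip_rpc_with_delay trace above deliberately launches spdk_tgt with both --no-rpc-server and --wait-for-rpc and treats the *ERROR* just printed as the expected result. A minimal sketch of that negative check is below; it only loosely mirrors the traced NOT/valid_exec_arg helpers, and the wrapper shown here is illustrative, not the harness's actual common/autotest_common.sh code.

  # Hypothetical restatement of the negative test traced above:
  # spdk_tgt must refuse to combine --wait-for-rpc with --no-rpc-server.
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: spdk_tgt unexpectedly started" >&2
      exit 1
  else
      es=$?    # non-zero exit status is the success condition for this test
      echo "PASS: spdk_tgt exited with status $es as expected"
  fi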
00:04:21.920 [2024-07-25 10:42:51.398660] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:21.920 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:21.920 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:21.920 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:21.920 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:21.920 00:04:21.920 real 0m0.077s 00:04:21.920 user 0m0.054s 00:04:21.920 sys 0m0.022s 00:04:21.920 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.920 10:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:21.920 ************************************ 00:04:21.920 END TEST skip_rpc_with_delay 00:04:21.920 ************************************ 00:04:21.920 10:42:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:21.920 10:42:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:21.920 10:42:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:21.920 10:42:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.920 10:42:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.920 10:42:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.920 ************************************ 00:04:21.920 START TEST exit_on_failed_rpc_init 00:04:21.920 ************************************ 00:04:21.920 10:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:21.920 10:42:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59184 00:04:21.920 10:42:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.920 10:42:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59184 00:04:21.920 10:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59184 ']' 00:04:21.920 10:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.920 10:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.920 10:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.920 10:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.920 10:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.920 [2024-07-25 10:42:51.542774] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:21.920 [2024-07-25 10:42:51.542905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59184 ] 00:04:22.179 [2024-07-25 10:42:51.683997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.179 [2024-07-25 10:42:51.829727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.179 [2024-07-25 10:42:51.902093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:23.118 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.118 [2024-07-25 10:42:52.605909] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:23.118 [2024-07-25 10:42:52.606016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59209 ] 00:04:23.118 [2024-07-25 10:42:52.746691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.376 [2024-07-25 10:42:52.876747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.376 [2024-07-25 10:42:52.876893] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
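The exit_on_failed_rpc_init sequence above starts one spdk_tgt on the default RPC socket and then expects a second instance to die with the "socket path /var/tmp/spdk.sock in use" error just printed. A rough sketch of that scenario, using only commands visible in the trace, follows; the naive socket-polling line is an assumption, the harness's waitforlisten helper is more careful.

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # First instance claims the default RPC socket /var/tmp/spdk.sock.
  "$SPDK_BIN" -m 0x1 &
  first_pid=$!

  # Naive wait until the RPC socket appears (illustrative only).
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # Second instance on another core mask must fail; in the traced run it
  # aborts on the RPC socket conflict and spdk_app_stop returns non-zero.
  if "$SPDK_BIN" -m 0x2; then
      echo "FAIL: second spdk_tgt should not have initialized" >&2
  fi

  kill -SIGINT "$first_pid" && wait "$first_pid"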
00:04:23.376 [2024-07-25 10:42:52.876911] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:23.376 [2024-07-25 10:42:52.876922] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59184 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59184 ']' 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59184 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.376 10:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59184 00:04:23.376 10:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.376 10:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:23.376 killing process with pid 59184 00:04:23.376 10:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59184' 00:04:23.376 10:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59184 00:04:23.376 10:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59184 00:04:23.942 00:04:23.942 real 0m2.068s 00:04:23.942 user 0m2.373s 00:04:23.942 sys 0m0.489s 00:04:23.942 10:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.942 10:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.942 ************************************ 00:04:23.942 END TEST exit_on_failed_rpc_init 00:04:23.942 ************************************ 00:04:23.942 10:42:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.942 00:04:23.942 real 0m15.407s 00:04:23.942 user 0m14.621s 00:04:23.942 sys 0m1.852s 00:04:23.942 10:42:53 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.942 10:42:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.943 ************************************ 00:04:23.943 END TEST skip_rpc 00:04:23.943 ************************************ 00:04:23.943 10:42:53 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:23.943 10:42:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.943 10:42:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.943 10:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:23.943 
************************************ 00:04:23.943 START TEST rpc_client 00:04:23.943 ************************************ 00:04:23.943 10:42:53 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.201 * Looking for test storage... 00:04:24.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:24.201 10:42:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:24.201 OK 00:04:24.201 10:42:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.201 00:04:24.201 real 0m0.110s 00:04:24.201 user 0m0.051s 00:04:24.201 sys 0m0.062s 00:04:24.201 10:42:53 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.201 ************************************ 00:04:24.201 END TEST rpc_client 00:04:24.201 10:42:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.201 ************************************ 00:04:24.201 10:42:53 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.201 10:42:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.201 10:42:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.201 10:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:24.201 ************************************ 00:04:24.201 START TEST json_config 00:04:24.201 ************************************ 00:04:24.201 10:42:53 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.201 10:42:53 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.201 10:42:53 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.201 10:42:53 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.201 10:42:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.201 10:42:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.201 10:42:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.201 10:42:53 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.201 10:42:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@47 -- # : 0 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:24.201 10:42:53 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.201 10:42:53 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.201 INFO: JSON configuration test init 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:24.201 10:42:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.201 10:42:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:24.201 10:42:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.201 10:42:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.201 10:42:53 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.201 10:42:53 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.201 10:42:53 json_config -- json_config/common.sh@10 -- # shift 00:04:24.201 10:42:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.201 10:42:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.202 10:42:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.202 10:42:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.202 10:42:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.202 10:42:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59327 00:04:24.202 Waiting for target to run... 00:04:24.202 10:42:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.202 10:42:53 json_config -- json_config/common.sh@25 -- # waitforlisten 59327 /var/tmp/spdk_tgt.sock 00:04:24.202 10:42:53 json_config -- common/autotest_common.sh@831 -- # '[' -z 59327 ']' 00:04:24.202 10:42:53 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.202 10:42:53 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.202 10:42:53 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
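"Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..." is printed by the harness's waitforlisten helper. The loop below is only a guess at the shape of such a wait, not the real common/autotest_common.sh implementation: it polls for the socket and gives up after a bounded number of retries.

  # Hypothetical wait loop (the real waitforlisten helper differs in detail).
  wait_for_rpc_socket() {
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} retries=100

      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
          [ -S "$sock" ] && return 0               # RPC socket is listening
          sleep 0.1
      done
      return 1                                      # timed out
  }

  # Usage, mirroring the trace above:
  # wait_for_rpc_socket "$spdk_pid" /var/tmp/spdk_tgt.sock || exit 1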
00:04:24.202 10:42:53 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.202 10:42:53 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.202 10:42:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.459 [2024-07-25 10:42:53.954396] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:24.459 [2024-07-25 10:42:53.954514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59327 ] 00:04:24.717 [2024-07-25 10:42:54.385235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.976 [2024-07-25 10:42:54.497132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.234 10:42:54 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:25.234 00:04:25.234 10:42:54 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:25.234 10:42:54 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.234 10:42:54 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:25.234 10:42:54 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:25.234 10:42:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.234 10:42:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.234 10:42:54 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:25.235 10:42:54 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:25.235 10:42:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.235 10:42:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.494 10:42:55 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:25.494 10:42:55 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:25.494 10:42:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:25.761 [2024-07-25 10:42:55.277269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:25.762 10:42:55 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:25.762 10:42:55 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:25.762 10:42:55 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.762 10:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.762 10:42:55 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:25.762 10:42:55 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:25.762 10:42:55 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:25.762 10:42:55 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:25.762 10:42:55 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:25.762 10:42:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@48 
-- # get_types=('bdev_register' 'bdev_unregister') 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@51 -- # sort 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:26.330 10:42:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.330 10:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:26.330 10:42:55 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.330 10:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:26.330 10:42:55 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.330 10:42:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.588 MallocForNvmf0 00:04:26.589 10:42:56 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.589 10:42:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.848 MallocForNvmf1 00:04:26.848 10:42:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.848 10:42:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.848 [2024-07-25 10:42:56.574853] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.107 10:42:56 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.107 10:42:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.366 10:42:56 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.366 10:42:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.625 10:42:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.625 10:42:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.882 10:42:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.883 10:42:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.883 [2024-07-25 10:42:57.603517] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:28.141 10:42:57 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:28.141 10:42:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.141 10:42:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.141 10:42:57 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:28.141 10:42:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.141 10:42:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.141 10:42:57 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:28.141 10:42:57 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.141 10:42:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.413 MallocBdevForConfigChangeCheck 00:04:28.413 10:42:57 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:28.413 10:42:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.413 10:42:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.413 10:42:57 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:28.413 10:42:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.671 INFO: shutting down applications... 00:04:28.671 10:42:58 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
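The tgt_rpc calls above build the NVMe-oF/TCP test configuration one RPC at a time. Collected into a plain script, with the same rpc.py invocations and socket as in the trace, the sequence looks roughly like this:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  # Two malloc bdevs to use as namespaces (size in MiB, block size), as traced.
  $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

  # TCP transport, then a subsystem with both namespaces and a TCP listener.
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420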
00:04:28.671 10:42:58 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:28.671 10:42:58 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:28.671 10:42:58 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:28.671 10:42:58 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:29.240 Calling clear_iscsi_subsystem 00:04:29.240 Calling clear_nvmf_subsystem 00:04:29.240 Calling clear_nbd_subsystem 00:04:29.240 Calling clear_ublk_subsystem 00:04:29.240 Calling clear_vhost_blk_subsystem 00:04:29.240 Calling clear_vhost_scsi_subsystem 00:04:29.240 Calling clear_bdev_subsystem 00:04:29.240 10:42:58 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:29.240 10:42:58 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:29.240 10:42:58 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:29.240 10:42:58 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.240 10:42:58 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:29.240 10:42:58 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:29.498 10:42:59 json_config -- json_config/json_config.sh@349 -- # break 00:04:29.498 10:42:59 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:29.498 10:42:59 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:29.498 10:42:59 json_config -- json_config/common.sh@31 -- # local app=target 00:04:29.498 10:42:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.498 10:42:59 json_config -- json_config/common.sh@35 -- # [[ -n 59327 ]] 00:04:29.498 10:42:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59327 00:04:29.498 10:42:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.498 10:42:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.498 10:42:59 json_config -- json_config/common.sh@41 -- # kill -0 59327 00:04:29.498 10:42:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.065 10:42:59 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.065 10:42:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.065 10:42:59 json_config -- json_config/common.sh@41 -- # kill -0 59327 00:04:30.065 10:42:59 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.065 10:42:59 json_config -- json_config/common.sh@43 -- # break 00:04:30.065 10:42:59 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.065 SPDK target shutdown done 00:04:30.065 10:42:59 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.065 INFO: relaunching applications... 00:04:30.065 10:42:59 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
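The json_config_test_shutdown_app steps traced above stop the target by sending SIGINT and then polling the pid for up to 30 half-second intervals. A compact restatement of that loop (function and variable names simplified from json_config/common.sh) is:

  shutdown_target() {
      local pid=$1

      kill -SIGINT "$pid"

      # Up to 30 * 0.5s = 15s for the reactor to exit cleanly.
      for (( i = 0; i < 30; i++ )); do
          kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
          sleep 0.5
      done

      echo "target $pid did not exit in time" >&2
      return 1
  }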
00:04:30.065 10:42:59 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.065 10:42:59 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.065 10:42:59 json_config -- json_config/common.sh@10 -- # shift 00:04:30.065 10:42:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.065 10:42:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.065 10:42:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.065 10:42:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.065 10:42:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.065 10:42:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59517 00:04:30.065 10:42:59 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.065 Waiting for target to run... 00:04:30.065 10:42:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.065 10:42:59 json_config -- json_config/common.sh@25 -- # waitforlisten 59517 /var/tmp/spdk_tgt.sock 00:04:30.065 10:42:59 json_config -- common/autotest_common.sh@831 -- # '[' -z 59517 ']' 00:04:30.065 10:42:59 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.065 10:42:59 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.065 10:42:59 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.065 10:42:59 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.065 10:42:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.065 [2024-07-25 10:42:59.708777] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:30.065 [2024-07-25 10:42:59.708897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59517 ] 00:04:30.632 [2024-07-25 10:43:00.131093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.632 [2024-07-25 10:43:00.245155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.890 [2024-07-25 10:43:00.371567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:30.890 [2024-07-25 10:43:00.586056] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.890 [2024-07-25 10:43:00.618127] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.148 10:43:00 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.148 10:43:00 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:31.148 00:04:31.148 10:43:00 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.148 10:43:00 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:31.148 INFO: Checking if target configuration is the same... 
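The "Checking if target configuration is the same..." step that follows saves the live configuration over RPC and compares it, key-sorted, against the spdk_tgt_config.json the target was relaunched from. Sketched with the same scripts the trace uses (json_diff.sh drives config_filter.py -method sort plus a plain diff); paths are taken from the trace:

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  # Dump the running target's config and diff it against the file it booted
  # from. json_diff.sh sorts both sides with config_filter.py first, so key
  # order does not cause false mismatches; exit 0 means the configs match.
  $SPDK/test/json_config/json_diff.sh <($RPC save_config) "$SPDK/spdk_tgt_config.json"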
00:04:31.148 10:43:00 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:31.148 10:43:00 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:31.148 10:43:00 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.149 10:43:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.149 + '[' 2 -ne 2 ']' 00:04:31.149 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:31.149 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:31.149 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:31.149 +++ basename /dev/fd/62 00:04:31.149 ++ mktemp /tmp/62.XXX 00:04:31.149 + tmp_file_1=/tmp/62.roK 00:04:31.149 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.149 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.149 + tmp_file_2=/tmp/spdk_tgt_config.json.3qx 00:04:31.149 + ret=0 00:04:31.149 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.407 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.407 + diff -u /tmp/62.roK /tmp/spdk_tgt_config.json.3qx 00:04:31.407 INFO: JSON config files are the same 00:04:31.407 + echo 'INFO: JSON config files are the same' 00:04:31.407 + rm /tmp/62.roK /tmp/spdk_tgt_config.json.3qx 00:04:31.407 + exit 0 00:04:31.407 10:43:01 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:31.407 INFO: changing configuration and checking if this can be detected... 00:04:31.407 10:43:01 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:31.407 10:43:01 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.407 10:43:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.666 10:43:01 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.667 10:43:01 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:31.667 10:43:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.925 + '[' 2 -ne 2 ']' 00:04:31.925 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:31.925 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:31.925 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:31.925 +++ basename /dev/fd/62 00:04:31.925 ++ mktemp /tmp/62.XXX 00:04:31.925 + tmp_file_1=/tmp/62.AkH 00:04:31.925 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.925 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.925 + tmp_file_2=/tmp/spdk_tgt_config.json.0ch 00:04:31.925 + ret=0 00:04:31.925 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.184 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.184 + diff -u /tmp/62.AkH /tmp/spdk_tgt_config.json.0ch 00:04:32.184 + ret=1 00:04:32.184 + echo '=== Start of file: /tmp/62.AkH ===' 00:04:32.184 + cat /tmp/62.AkH 00:04:32.184 + echo '=== End of file: /tmp/62.AkH ===' 00:04:32.184 + echo '' 00:04:32.184 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0ch ===' 00:04:32.184 + cat /tmp/spdk_tgt_config.json.0ch 00:04:32.184 + echo '=== End of file: /tmp/spdk_tgt_config.json.0ch ===' 00:04:32.184 + echo '' 00:04:32.184 + rm /tmp/62.AkH /tmp/spdk_tgt_config.json.0ch 00:04:32.184 + exit 1 00:04:32.184 INFO: configuration change detected. 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@321 -- # [[ -n 59517 ]] 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.184 10:43:01 json_config -- json_config/json_config.sh@327 -- # killprocess 59517 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@950 -- # '[' -z 59517 ']' 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@954 -- # kill -0 59517 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@955 -- # uname 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.184 10:43:01 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59517 00:04:32.443 
10:43:01 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.443 10:43:01 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.443 killing process with pid 59517 00:04:32.443 10:43:01 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59517' 00:04:32.443 10:43:01 json_config -- common/autotest_common.sh@969 -- # kill 59517 00:04:32.443 10:43:01 json_config -- common/autotest_common.sh@974 -- # wait 59517 00:04:32.702 10:43:02 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.702 10:43:02 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:32.702 10:43:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.702 10:43:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.702 10:43:02 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:32.702 10:43:02 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:32.702 INFO: Success 00:04:32.702 00:04:32.702 real 0m8.510s 00:04:32.702 user 0m12.216s 00:04:32.702 sys 0m1.794s 00:04:32.702 10:43:02 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.702 10:43:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.702 ************************************ 00:04:32.702 END TEST json_config 00:04:32.702 ************************************ 00:04:32.702 10:43:02 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.702 10:43:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.702 10:43:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.702 10:43:02 -- common/autotest_common.sh@10 -- # set +x 00:04:32.702 ************************************ 00:04:32.702 START TEST json_config_extra_key 00:04:32.702 ************************************ 00:04:32.702 10:43:02 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:04:32.702 10:43:02 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.702 10:43:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.702 10:43:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.702 10:43:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.702 10:43:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.702 10:43:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.702 10:43:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.702 10:43:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.702 10:43:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.702 10:43:02 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:32.702 10:43:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.702 INFO: launching applications... 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.702 10:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59663 00:04:32.702 Waiting for target to run... 00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59663 /var/tmp/spdk_tgt.sock 00:04:32.702 10:43:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.702 10:43:02 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59663 ']' 00:04:32.702 10:43:02 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.702 10:43:02 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.702 10:43:02 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.702 10:43:02 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.702 10:43:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.960 [2024-07-25 10:43:02.491563] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:32.960 [2024-07-25 10:43:02.491675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59663 ] 00:04:33.234 [2024-07-25 10:43:02.901654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.491 [2024-07-25 10:43:03.014416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.491 [2024-07-25 10:43:03.034788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:34.055 10:43:03 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.055 00:04:34.055 10:43:03 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:34.055 10:43:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:34.055 INFO: shutting down applications... 00:04:34.055 10:43:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
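The json_config_extra_key run above starts spdk_tgt with an explicit JSON configuration and then waits for its RPC socket before moving on. A minimal stand-alone sketch of that start-and-wait pattern follows; the readiness check and retry budget are illustrative, not lifted from the test harness.

#!/usr/bin/env bash
# Sketch: launch spdk_tgt with a JSON config and poll its RPC socket until ready.
SPDK_DIR=/home/vagrant/spdk_repo/spdk            # adjust to your checkout
SOCK=/var/tmp/spdk_tgt.sock
CONFIG=$SPDK_DIR/test/json_config/extra_key.json

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --json "$CONFIG" &
tgt_pid=$!

# Poll the UNIX-domain RPC socket; give up after ~30 seconds.
for _ in $(seq 1 60); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "spdk_tgt (pid $tgt_pid) is listening on $SOCK"
        break
    fi
    sleep 0.5
done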
00:04:34.055 10:43:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:34.055 10:43:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:34.055 10:43:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.055 10:43:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59663 ]] 00:04:34.055 10:43:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59663 00:04:34.055 10:43:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.055 10:43:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.055 10:43:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59663 00:04:34.055 10:43:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.313 10:43:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.313 10:43:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.313 10:43:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59663 00:04:34.313 10:43:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.877 10:43:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.877 10:43:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.877 10:43:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59663 00:04:34.877 10:43:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.877 10:43:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.877 10:43:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.877 10:43:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.877 SPDK target shutdown done 00:04:34.877 Success 00:04:34.877 10:43:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.877 00:04:34.877 real 0m2.162s 00:04:34.877 user 0m1.766s 00:04:34.877 sys 0m0.434s 00:04:34.877 10:43:04 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.877 ************************************ 00:04:34.877 END TEST json_config_extra_key 00:04:34.877 10:43:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.877 ************************************ 00:04:34.877 10:43:04 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.877 10:43:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.877 10:43:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.877 10:43:04 -- common/autotest_common.sh@10 -- # set +x 00:04:34.877 ************************************ 00:04:34.877 START TEST alias_rpc 00:04:34.877 ************************************ 00:04:34.877 10:43:04 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.135 * Looking for test storage... 
00:04:35.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:35.135 10:43:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:35.135 10:43:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59734 00:04:35.135 10:43:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.135 10:43:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59734 00:04:35.135 10:43:04 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59734 ']' 00:04:35.135 10:43:04 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.135 10:43:04 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.135 10:43:04 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.135 10:43:04 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.135 10:43:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.135 [2024-07-25 10:43:04.726647] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:35.135 [2024-07-25 10:43:04.726753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59734 ] 00:04:35.135 [2024-07-25 10:43:04.861414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.393 [2024-07-25 10:43:04.992461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.393 [2024-07-25 10:43:05.065892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:36.326 10:43:05 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.326 10:43:05 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:36.326 10:43:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:36.583 10:43:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59734 00:04:36.583 10:43:06 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59734 ']' 00:04:36.583 10:43:06 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59734 00:04:36.583 10:43:06 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:36.583 10:43:06 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.583 10:43:06 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59734 00:04:36.583 10:43:06 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.584 10:43:06 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.584 10:43:06 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59734' 00:04:36.584 killing process with pid 59734 00:04:36.584 10:43:06 alias_rpc -- common/autotest_common.sh@969 -- # kill 59734 00:04:36.584 10:43:06 alias_rpc -- common/autotest_common.sh@974 -- # wait 59734 00:04:37.149 00:04:37.149 real 0m2.075s 00:04:37.149 user 0m2.356s 00:04:37.149 sys 0m0.518s 00:04:37.149 10:43:06 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.149 10:43:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.149 
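The alias_rpc run above drives the already-running target through scripts/rpc.py load_config, which reads a JSON configuration (stdin by default) and replays it as individual RPC calls; the -i flag used here appears to be what lets the deprecated alias method names under test go through. A rough round-trip sketch, with the temporary file path assumed rather than taken from the test:

# Sketch: capture the live configuration and replay it over RPC.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py"              # talks to /var/tmp/spdk.sock by default

"$RPC" save_config > /tmp/spdk_cfg.json     # dump the current subsystem config as JSON
"$RPC" load_config -i < /tmp/spdk_cfg.json  # replay it; -i matches the flag in the run above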
************************************ 00:04:37.149 END TEST alias_rpc 00:04:37.149 ************************************ 00:04:37.149 10:43:06 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:37.149 10:43:06 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:37.149 10:43:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.149 10:43:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.149 10:43:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.149 ************************************ 00:04:37.149 START TEST spdkcli_tcp 00:04:37.149 ************************************ 00:04:37.149 10:43:06 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:37.149 * Looking for test storage... 00:04:37.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:37.149 10:43:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:37.149 10:43:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:37.149 10:43:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:37.149 10:43:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:37.149 10:43:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:37.149 10:43:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:37.149 10:43:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:37.149 10:43:06 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.149 10:43:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.149 10:43:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59810 00:04:37.149 10:43:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59810 00:04:37.149 10:43:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:37.149 10:43:06 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59810 ']' 00:04:37.149 10:43:06 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.149 10:43:06 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.149 10:43:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.149 10:43:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.149 10:43:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.149 [2024-07-25 10:43:06.847234] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:37.149 [2024-07-25 10:43:06.847753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59810 ] 00:04:37.407 [2024-07-25 10:43:06.976248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.407 [2024-07-25 10:43:07.117637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.407 [2024-07-25 10:43:07.117651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.665 [2024-07-25 10:43:07.191038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:38.232 10:43:07 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.232 10:43:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:38.232 10:43:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:38.232 10:43:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59827 00:04:38.232 10:43:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:38.491 [ 00:04:38.491 "bdev_malloc_delete", 00:04:38.491 "bdev_malloc_create", 00:04:38.491 "bdev_null_resize", 00:04:38.491 "bdev_null_delete", 00:04:38.491 "bdev_null_create", 00:04:38.491 "bdev_nvme_cuse_unregister", 00:04:38.491 "bdev_nvme_cuse_register", 00:04:38.491 "bdev_opal_new_user", 00:04:38.491 "bdev_opal_set_lock_state", 00:04:38.491 "bdev_opal_delete", 00:04:38.491 "bdev_opal_get_info", 00:04:38.491 "bdev_opal_create", 00:04:38.491 "bdev_nvme_opal_revert", 00:04:38.491 "bdev_nvme_opal_init", 00:04:38.491 "bdev_nvme_send_cmd", 00:04:38.491 "bdev_nvme_get_path_iostat", 00:04:38.491 "bdev_nvme_get_mdns_discovery_info", 00:04:38.491 "bdev_nvme_stop_mdns_discovery", 00:04:38.491 "bdev_nvme_start_mdns_discovery", 00:04:38.491 "bdev_nvme_set_multipath_policy", 00:04:38.491 "bdev_nvme_set_preferred_path", 00:04:38.491 "bdev_nvme_get_io_paths", 00:04:38.491 "bdev_nvme_remove_error_injection", 00:04:38.491 "bdev_nvme_add_error_injection", 00:04:38.491 "bdev_nvme_get_discovery_info", 00:04:38.491 "bdev_nvme_stop_discovery", 00:04:38.491 "bdev_nvme_start_discovery", 00:04:38.491 "bdev_nvme_get_controller_health_info", 00:04:38.491 "bdev_nvme_disable_controller", 00:04:38.491 "bdev_nvme_enable_controller", 00:04:38.491 "bdev_nvme_reset_controller", 00:04:38.491 "bdev_nvme_get_transport_statistics", 00:04:38.491 "bdev_nvme_apply_firmware", 00:04:38.491 "bdev_nvme_detach_controller", 00:04:38.491 "bdev_nvme_get_controllers", 00:04:38.491 "bdev_nvme_attach_controller", 00:04:38.491 "bdev_nvme_set_hotplug", 00:04:38.491 "bdev_nvme_set_options", 00:04:38.491 "bdev_passthru_delete", 00:04:38.491 "bdev_passthru_create", 00:04:38.491 "bdev_lvol_set_parent_bdev", 00:04:38.491 "bdev_lvol_set_parent", 00:04:38.491 "bdev_lvol_check_shallow_copy", 00:04:38.491 "bdev_lvol_start_shallow_copy", 00:04:38.491 "bdev_lvol_grow_lvstore", 00:04:38.491 "bdev_lvol_get_lvols", 00:04:38.491 "bdev_lvol_get_lvstores", 00:04:38.491 "bdev_lvol_delete", 00:04:38.491 "bdev_lvol_set_read_only", 00:04:38.491 "bdev_lvol_resize", 00:04:38.491 "bdev_lvol_decouple_parent", 00:04:38.491 "bdev_lvol_inflate", 00:04:38.491 "bdev_lvol_rename", 00:04:38.491 "bdev_lvol_clone_bdev", 00:04:38.491 "bdev_lvol_clone", 00:04:38.491 "bdev_lvol_snapshot", 00:04:38.491 "bdev_lvol_create", 
00:04:38.491 "bdev_lvol_delete_lvstore", 00:04:38.491 "bdev_lvol_rename_lvstore", 00:04:38.491 "bdev_lvol_create_lvstore", 00:04:38.491 "bdev_raid_set_options", 00:04:38.491 "bdev_raid_remove_base_bdev", 00:04:38.491 "bdev_raid_add_base_bdev", 00:04:38.491 "bdev_raid_delete", 00:04:38.491 "bdev_raid_create", 00:04:38.491 "bdev_raid_get_bdevs", 00:04:38.491 "bdev_error_inject_error", 00:04:38.491 "bdev_error_delete", 00:04:38.491 "bdev_error_create", 00:04:38.491 "bdev_split_delete", 00:04:38.491 "bdev_split_create", 00:04:38.491 "bdev_delay_delete", 00:04:38.491 "bdev_delay_create", 00:04:38.491 "bdev_delay_update_latency", 00:04:38.491 "bdev_zone_block_delete", 00:04:38.491 "bdev_zone_block_create", 00:04:38.491 "blobfs_create", 00:04:38.491 "blobfs_detect", 00:04:38.491 "blobfs_set_cache_size", 00:04:38.491 "bdev_aio_delete", 00:04:38.491 "bdev_aio_rescan", 00:04:38.491 "bdev_aio_create", 00:04:38.491 "bdev_ftl_set_property", 00:04:38.491 "bdev_ftl_get_properties", 00:04:38.491 "bdev_ftl_get_stats", 00:04:38.491 "bdev_ftl_unmap", 00:04:38.491 "bdev_ftl_unload", 00:04:38.491 "bdev_ftl_delete", 00:04:38.491 "bdev_ftl_load", 00:04:38.491 "bdev_ftl_create", 00:04:38.491 "bdev_virtio_attach_controller", 00:04:38.491 "bdev_virtio_scsi_get_devices", 00:04:38.491 "bdev_virtio_detach_controller", 00:04:38.491 "bdev_virtio_blk_set_hotplug", 00:04:38.491 "bdev_iscsi_delete", 00:04:38.491 "bdev_iscsi_create", 00:04:38.491 "bdev_iscsi_set_options", 00:04:38.491 "bdev_uring_delete", 00:04:38.491 "bdev_uring_rescan", 00:04:38.491 "bdev_uring_create", 00:04:38.491 "accel_error_inject_error", 00:04:38.491 "ioat_scan_accel_module", 00:04:38.491 "dsa_scan_accel_module", 00:04:38.491 "iaa_scan_accel_module", 00:04:38.491 "keyring_file_remove_key", 00:04:38.491 "keyring_file_add_key", 00:04:38.491 "keyring_linux_set_options", 00:04:38.491 "iscsi_get_histogram", 00:04:38.491 "iscsi_enable_histogram", 00:04:38.491 "iscsi_set_options", 00:04:38.491 "iscsi_get_auth_groups", 00:04:38.491 "iscsi_auth_group_remove_secret", 00:04:38.491 "iscsi_auth_group_add_secret", 00:04:38.491 "iscsi_delete_auth_group", 00:04:38.491 "iscsi_create_auth_group", 00:04:38.491 "iscsi_set_discovery_auth", 00:04:38.491 "iscsi_get_options", 00:04:38.491 "iscsi_target_node_request_logout", 00:04:38.491 "iscsi_target_node_set_redirect", 00:04:38.491 "iscsi_target_node_set_auth", 00:04:38.491 "iscsi_target_node_add_lun", 00:04:38.491 "iscsi_get_stats", 00:04:38.491 "iscsi_get_connections", 00:04:38.491 "iscsi_portal_group_set_auth", 00:04:38.491 "iscsi_start_portal_group", 00:04:38.491 "iscsi_delete_portal_group", 00:04:38.491 "iscsi_create_portal_group", 00:04:38.491 "iscsi_get_portal_groups", 00:04:38.491 "iscsi_delete_target_node", 00:04:38.491 "iscsi_target_node_remove_pg_ig_maps", 00:04:38.491 "iscsi_target_node_add_pg_ig_maps", 00:04:38.491 "iscsi_create_target_node", 00:04:38.491 "iscsi_get_target_nodes", 00:04:38.491 "iscsi_delete_initiator_group", 00:04:38.491 "iscsi_initiator_group_remove_initiators", 00:04:38.491 "iscsi_initiator_group_add_initiators", 00:04:38.491 "iscsi_create_initiator_group", 00:04:38.491 "iscsi_get_initiator_groups", 00:04:38.491 "nvmf_set_crdt", 00:04:38.491 "nvmf_set_config", 00:04:38.491 "nvmf_set_max_subsystems", 00:04:38.491 "nvmf_stop_mdns_prr", 00:04:38.491 "nvmf_publish_mdns_prr", 00:04:38.491 "nvmf_subsystem_get_listeners", 00:04:38.491 "nvmf_subsystem_get_qpairs", 00:04:38.491 "nvmf_subsystem_get_controllers", 00:04:38.491 "nvmf_get_stats", 00:04:38.491 "nvmf_get_transports", 00:04:38.491 
"nvmf_create_transport", 00:04:38.491 "nvmf_get_targets", 00:04:38.491 "nvmf_delete_target", 00:04:38.491 "nvmf_create_target", 00:04:38.491 "nvmf_subsystem_allow_any_host", 00:04:38.491 "nvmf_subsystem_remove_host", 00:04:38.491 "nvmf_subsystem_add_host", 00:04:38.491 "nvmf_ns_remove_host", 00:04:38.491 "nvmf_ns_add_host", 00:04:38.491 "nvmf_subsystem_remove_ns", 00:04:38.491 "nvmf_subsystem_add_ns", 00:04:38.491 "nvmf_subsystem_listener_set_ana_state", 00:04:38.491 "nvmf_discovery_get_referrals", 00:04:38.491 "nvmf_discovery_remove_referral", 00:04:38.491 "nvmf_discovery_add_referral", 00:04:38.491 "nvmf_subsystem_remove_listener", 00:04:38.491 "nvmf_subsystem_add_listener", 00:04:38.491 "nvmf_delete_subsystem", 00:04:38.491 "nvmf_create_subsystem", 00:04:38.491 "nvmf_get_subsystems", 00:04:38.491 "env_dpdk_get_mem_stats", 00:04:38.491 "nbd_get_disks", 00:04:38.491 "nbd_stop_disk", 00:04:38.491 "nbd_start_disk", 00:04:38.491 "ublk_recover_disk", 00:04:38.491 "ublk_get_disks", 00:04:38.491 "ublk_stop_disk", 00:04:38.491 "ublk_start_disk", 00:04:38.491 "ublk_destroy_target", 00:04:38.491 "ublk_create_target", 00:04:38.491 "virtio_blk_create_transport", 00:04:38.491 "virtio_blk_get_transports", 00:04:38.491 "vhost_controller_set_coalescing", 00:04:38.491 "vhost_get_controllers", 00:04:38.492 "vhost_delete_controller", 00:04:38.492 "vhost_create_blk_controller", 00:04:38.492 "vhost_scsi_controller_remove_target", 00:04:38.492 "vhost_scsi_controller_add_target", 00:04:38.492 "vhost_start_scsi_controller", 00:04:38.492 "vhost_create_scsi_controller", 00:04:38.492 "thread_set_cpumask", 00:04:38.492 "framework_get_governor", 00:04:38.492 "framework_get_scheduler", 00:04:38.492 "framework_set_scheduler", 00:04:38.492 "framework_get_reactors", 00:04:38.492 "thread_get_io_channels", 00:04:38.492 "thread_get_pollers", 00:04:38.492 "thread_get_stats", 00:04:38.492 "framework_monitor_context_switch", 00:04:38.492 "spdk_kill_instance", 00:04:38.492 "log_enable_timestamps", 00:04:38.492 "log_get_flags", 00:04:38.492 "log_clear_flag", 00:04:38.492 "log_set_flag", 00:04:38.492 "log_get_level", 00:04:38.492 "log_set_level", 00:04:38.492 "log_get_print_level", 00:04:38.492 "log_set_print_level", 00:04:38.492 "framework_enable_cpumask_locks", 00:04:38.492 "framework_disable_cpumask_locks", 00:04:38.492 "framework_wait_init", 00:04:38.492 "framework_start_init", 00:04:38.492 "scsi_get_devices", 00:04:38.492 "bdev_get_histogram", 00:04:38.492 "bdev_enable_histogram", 00:04:38.492 "bdev_set_qos_limit", 00:04:38.492 "bdev_set_qd_sampling_period", 00:04:38.492 "bdev_get_bdevs", 00:04:38.492 "bdev_reset_iostat", 00:04:38.492 "bdev_get_iostat", 00:04:38.492 "bdev_examine", 00:04:38.492 "bdev_wait_for_examine", 00:04:38.492 "bdev_set_options", 00:04:38.492 "notify_get_notifications", 00:04:38.492 "notify_get_types", 00:04:38.492 "accel_get_stats", 00:04:38.492 "accel_set_options", 00:04:38.492 "accel_set_driver", 00:04:38.492 "accel_crypto_key_destroy", 00:04:38.492 "accel_crypto_keys_get", 00:04:38.492 "accel_crypto_key_create", 00:04:38.492 "accel_assign_opc", 00:04:38.492 "accel_get_module_info", 00:04:38.492 "accel_get_opc_assignments", 00:04:38.492 "vmd_rescan", 00:04:38.492 "vmd_remove_device", 00:04:38.492 "vmd_enable", 00:04:38.492 "sock_get_default_impl", 00:04:38.492 "sock_set_default_impl", 00:04:38.492 "sock_impl_set_options", 00:04:38.492 "sock_impl_get_options", 00:04:38.492 "iobuf_get_stats", 00:04:38.492 "iobuf_set_options", 00:04:38.492 "framework_get_pci_devices", 00:04:38.492 
"framework_get_config", 00:04:38.492 "framework_get_subsystems", 00:04:38.492 "trace_get_info", 00:04:38.492 "trace_get_tpoint_group_mask", 00:04:38.492 "trace_disable_tpoint_group", 00:04:38.492 "trace_enable_tpoint_group", 00:04:38.492 "trace_clear_tpoint_mask", 00:04:38.492 "trace_set_tpoint_mask", 00:04:38.492 "keyring_get_keys", 00:04:38.492 "spdk_get_version", 00:04:38.492 "rpc_get_methods" 00:04:38.492 ] 00:04:38.492 10:43:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.492 10:43:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:38.492 10:43:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59810 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59810 ']' 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59810 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59810 00:04:38.492 killing process with pid 59810 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59810' 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59810 00:04:38.492 10:43:08 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59810 00:04:39.057 ************************************ 00:04:39.057 END TEST spdkcli_tcp 00:04:39.057 ************************************ 00:04:39.057 00:04:39.057 real 0m2.050s 00:04:39.057 user 0m3.733s 00:04:39.057 sys 0m0.588s 00:04:39.057 10:43:08 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.057 10:43:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.057 10:43:08 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.057 10:43:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.057 10:43:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.057 10:43:08 -- common/autotest_common.sh@10 -- # set +x 00:04:39.315 ************************************ 00:04:39.315 START TEST dpdk_mem_utility 00:04:39.315 ************************************ 00:04:39.315 10:43:08 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.315 * Looking for test storage... 
00:04:39.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:39.315 10:43:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:39.315 10:43:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59907 00:04:39.315 10:43:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.315 10:43:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59907 00:04:39.315 10:43:08 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59907 ']' 00:04:39.315 10:43:08 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.315 10:43:08 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.315 10:43:08 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.315 10:43:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.315 10:43:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.315 [2024-07-25 10:43:08.931817] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:39.315 [2024-07-25 10:43:08.931933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59907 ] 00:04:39.572 [2024-07-25 10:43:09.065671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.572 [2024-07-25 10:43:09.192971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.572 [2024-07-25 10:43:09.267046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:40.507 10:43:09 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.507 10:43:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:40.507 10:43:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:40.507 10:43:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:40.507 10:43:09 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.507 10:43:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.507 { 00:04:40.507 "filename": "/tmp/spdk_mem_dump.txt" 00:04:40.507 } 00:04:40.507 10:43:09 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.507 10:43:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:40.507 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:40.507 1 heaps totaling size 814.000000 MiB 00:04:40.507 size: 814.000000 MiB heap id: 0 00:04:40.507 end heaps---------- 00:04:40.507 8 mempools totaling size 598.116089 MiB 00:04:40.507 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:40.507 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:40.507 size: 84.521057 MiB name: bdev_io_59907 00:04:40.507 size: 51.011292 MiB name: evtpool_59907 00:04:40.507 size: 50.003479 
MiB name: msgpool_59907 00:04:40.507 size: 21.763794 MiB name: PDU_Pool 00:04:40.507 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:40.507 size: 0.026123 MiB name: Session_Pool 00:04:40.507 end mempools------- 00:04:40.507 6 memzones totaling size 4.142822 MiB 00:04:40.507 size: 1.000366 MiB name: RG_ring_0_59907 00:04:40.507 size: 1.000366 MiB name: RG_ring_1_59907 00:04:40.507 size: 1.000366 MiB name: RG_ring_4_59907 00:04:40.507 size: 1.000366 MiB name: RG_ring_5_59907 00:04:40.507 size: 0.125366 MiB name: RG_ring_2_59907 00:04:40.507 size: 0.015991 MiB name: RG_ring_3_59907 00:04:40.507 end memzones------- 00:04:40.507 10:43:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:40.507 heap id: 0 total size: 814.000000 MiB number of busy elements: 302 number of free elements: 15 00:04:40.507 list of free elements. size: 12.471558 MiB 00:04:40.507 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:40.507 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:40.507 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:40.507 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:40.507 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:40.507 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:40.507 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:40.507 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:40.507 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:40.507 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:04:40.507 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:40.507 element at address: 0x200000800000 with size: 0.486328 MiB 00:04:40.507 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:40.507 element at address: 0x200027e00000 with size: 0.395935 MiB 00:04:40.507 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:40.507 list of standard malloc elements. 
size: 199.265869 MiB 00:04:40.507 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:40.507 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:40.507 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:40.507 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:40.507 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:40.507 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:40.507 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:40.507 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:40.507 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:40.507 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:40.507 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:40.507 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:40.507 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:40.507 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:40.507 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:40.508 element at 
address: 0x200003a5a380 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa91e40 
with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94300 with size: 0.000183 MiB 
00:04:40.508 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:40.508 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:40.509 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e65680 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:40.509 element at 
address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6fa80 
with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:40.509 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:40.509 list of memzone associated elements. size: 602.262573 MiB 00:04:40.509 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:40.509 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:40.509 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:40.509 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:40.509 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:40.509 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59907_0 00:04:40.509 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:40.509 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59907_0 00:04:40.509 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:40.509 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59907_0 00:04:40.509 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:40.509 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:40.509 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:40.509 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:40.509 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:40.509 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59907 00:04:40.509 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:40.509 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59907 00:04:40.509 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:40.509 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59907 00:04:40.509 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:40.509 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:40.509 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:40.509 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:40.509 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:40.509 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:40.509 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:40.509 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:40.509 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:40.509 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59907 00:04:40.509 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:40.509 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59907 00:04:40.509 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:40.509 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59907 00:04:40.509 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:40.509 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59907 00:04:40.509 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:40.509 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59907 00:04:40.509 element at address: 0x20000b27db80 with size: 0.500488 MiB 
00:04:40.509 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:40.509 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:40.509 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:40.509 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:40.509 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:40.509 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:40.509 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59907 00:04:40.509 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:40.509 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:40.509 element at address: 0x200027e65740 with size: 0.023743 MiB 00:04:40.509 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:40.509 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:40.509 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59907 00:04:40.509 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:04:40.509 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:40.509 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:40.509 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59907 00:04:40.509 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:40.509 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59907 00:04:40.509 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:04:40.509 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:40.509 10:43:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:40.509 10:43:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59907 00:04:40.509 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59907 ']' 00:04:40.510 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59907 00:04:40.510 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:40.510 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.510 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59907 00:04:40.510 killing process with pid 59907 00:04:40.510 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.510 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.510 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59907' 00:04:40.510 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59907 00:04:40.510 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59907 00:04:41.076 ************************************ 00:04:41.076 END TEST dpdk_mem_utility 00:04:41.076 ************************************ 00:04:41.076 00:04:41.076 real 0m1.851s 00:04:41.076 user 0m1.896s 00:04:41.076 sys 0m0.507s 00:04:41.076 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.076 10:43:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.076 10:43:10 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:41.076 10:43:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.076 10:43:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.076 
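The dpdk_mem_utility run above works in two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarizes that dump into the heap, mempool and memzone listings shown above; the second invocation with -m 0 appears to expand the per-element detail for malloc heap 0. A minimal sketch, assuming a target is already running on the default RPC socket:

# Sketch: dump and inspect DPDK memory usage of a running SPDK application.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
"$SPDK_DIR/scripts/dpdk_mem_info.py"                # heap/mempool/memzone summary
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0           # detailed element view, as in the run above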
10:43:10 -- common/autotest_common.sh@10 -- # set +x 00:04:41.076 ************************************ 00:04:41.076 START TEST event 00:04:41.076 ************************************ 00:04:41.076 10:43:10 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:41.076 * Looking for test storage... 00:04:41.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:41.076 10:43:10 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:41.076 10:43:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:41.076 10:43:10 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:41.076 10:43:10 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:41.076 10:43:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.076 10:43:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.076 ************************************ 00:04:41.076 START TEST event_perf 00:04:41.076 ************************************ 00:04:41.076 10:43:10 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:41.334 Running I/O for 1 seconds...[2024-07-25 10:43:10.820911] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:41.334 [2024-07-25 10:43:10.821006] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59978 ] 00:04:41.334 [2024-07-25 10:43:10.960802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.592 [2024-07-25 10:43:11.090557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.592 [2024-07-25 10:43:11.090672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.592 [2024-07-25 10:43:11.090810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.593 [2024-07-25 10:43:11.090818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.529 Running I/O for 1 seconds... 00:04:42.529 lcore 0: 124978 00:04:42.529 lcore 1: 124977 00:04:42.529 lcore 2: 124978 00:04:42.529 lcore 3: 124977 00:04:42.529 done. 
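The lcore lines above are the complete event_perf result: with core mask 0xF the app starts four reactors and each one reports how many events it processed during the one-second run, followed by "done.". A minimal sketch of the same invocation outside the autotest harness, with the binary path and flags taken from this log (hugepage/root setup is assumed to already be in place):

    cd /home/vagrant/spdk_repo/spdk
    ./test/event/event_perf/event_perf -m 0xF -t 1   # -m: reactor core mask, -t: run time in seconds
    # expected output shape: one "lcore N: <events processed>" line per reactor, then "done."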
00:04:42.529 00:04:42.529 real 0m1.407s 00:04:42.529 user 0m4.214s 00:04:42.529 sys 0m0.072s 00:04:42.529 10:43:12 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.529 10:43:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.529 ************************************ 00:04:42.529 END TEST event_perf 00:04:42.529 ************************************ 00:04:42.529 10:43:12 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:42.529 10:43:12 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:42.529 10:43:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.529 10:43:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.529 ************************************ 00:04:42.529 START TEST event_reactor 00:04:42.529 ************************************ 00:04:42.529 10:43:12 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:42.788 [2024-07-25 10:43:12.283820] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:42.788 [2024-07-25 10:43:12.284005] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60017 ] 00:04:42.788 [2024-07-25 10:43:12.425221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.047 [2024-07-25 10:43:12.544132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.981 test_start 00:04:43.981 oneshot 00:04:43.981 tick 100 00:04:43.981 tick 100 00:04:43.981 tick 250 00:04:43.981 tick 100 00:04:43.981 tick 100 00:04:43.981 tick 250 00:04:43.981 tick 500 00:04:43.981 tick 100 00:04:43.981 tick 100 00:04:43.981 tick 100 00:04:43.981 tick 250 00:04:43.981 tick 100 00:04:43.981 tick 100 00:04:43.981 test_end 00:04:43.981 ************************************ 00:04:43.981 END TEST event_reactor 00:04:43.981 ************************************ 00:04:43.981 00:04:43.981 real 0m1.388s 00:04:43.981 user 0m1.211s 00:04:43.981 sys 0m0.073s 00:04:43.981 10:43:13 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.981 10:43:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:43.981 10:43:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.981 10:43:13 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:43.981 10:43:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.981 10:43:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.981 ************************************ 00:04:43.981 START TEST event_reactor_perf 00:04:43.981 ************************************ 00:04:43.981 10:43:13 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:44.239 [2024-07-25 10:43:13.729029] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:44.239 [2024-07-25 10:43:13.729310] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60052 ] 00:04:44.239 [2024-07-25 10:43:13.867833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.497 [2024-07-25 10:43:13.994752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.442 test_start 00:04:45.442 test_end 00:04:45.442 Performance: 392522 events per second 00:04:45.442 00:04:45.442 real 0m1.396s 00:04:45.442 user 0m1.225s 00:04:45.442 sys 0m0.064s 00:04:45.442 10:43:15 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.442 ************************************ 00:04:45.442 END TEST event_reactor_perf 00:04:45.442 ************************************ 00:04:45.442 10:43:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.442 10:43:15 event -- event/event.sh@49 -- # uname -s 00:04:45.442 10:43:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:45.442 10:43:15 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:45.442 10:43:15 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.442 10:43:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.442 10:43:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.442 ************************************ 00:04:45.442 START TEST event_scheduler 00:04:45.442 ************************************ 00:04:45.442 10:43:15 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:45.700 * Looking for test storage... 00:04:45.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:45.700 10:43:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:45.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.700 10:43:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60114 00:04:45.700 10:43:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.700 10:43:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60114 00:04:45.700 10:43:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:45.700 10:43:15 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60114 ']' 00:04:45.700 10:43:15 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.700 10:43:15 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.700 10:43:15 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.700 10:43:15 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.700 10:43:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.700 [2024-07-25 10:43:15.288355] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:45.700 [2024-07-25 10:43:15.288627] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60114 ] 00:04:45.700 [2024-07-25 10:43:15.428545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.959 [2024-07-25 10:43:15.591428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.959 [2024-07-25 10:43:15.591763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.959 [2024-07-25 10:43:15.591621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.959 [2024-07-25 10:43:15.591757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.525 10:43:16 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.525 10:43:16 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:46.525 10:43:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:46.525 10:43:16 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.525 10:43:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.525 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.525 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.525 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.525 POWER: Cannot set governor of lcore 0 to performance 00:04:46.525 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.525 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.525 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.525 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.525 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:46.525 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:46.525 POWER: Unable to set Power Management Environment for lcore 0 00:04:46.525 [2024-07-25 10:43:16.254888] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:46.525 [2024-07-25 10:43:16.255000] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:46.525 [2024-07-25 10:43:16.255096] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:46.525 [2024-07-25 10:43:16.255198] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:46.525 [2024-07-25 10:43:16.255239] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:46.525 [2024-07-25 10:43:16.255303] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:46.525 10:43:16 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.525 10:43:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:46.525 10:43:16 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.525 10:43:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.783 [2024-07-25 10:43:16.335684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:46.783 [2024-07-25 10:43:16.381384] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:46.784 10:43:16 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:46.784 10:43:16 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.784 10:43:16 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 ************************************ 00:04:46.784 START TEST scheduler_create_thread 00:04:46.784 ************************************ 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 2 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 3 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 4 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 5 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 6 00:04:46.784 
10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 7 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 8 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 9 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 10 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.784 10:43:16 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.784 10:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.686 10:43:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.686 10:43:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:48.686 10:43:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:48.686 10:43:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.686 10:43:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.636 10:43:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.636 00:04:49.636 real 0m2.617s 00:04:49.636 user 0m0.016s 00:04:49.636 sys 0m0.009s 00:04:49.636 ************************************ 00:04:49.637 END TEST scheduler_create_thread 00:04:49.637 ************************************ 00:04:49.637 10:43:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.637 10:43:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.637 10:43:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:49.637 10:43:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60114 00:04:49.637 10:43:19 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60114 ']' 00:04:49.637 10:43:19 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60114 00:04:49.637 10:43:19 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:49.637 10:43:19 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.637 10:43:19 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60114 00:04:49.637 killing process with pid 60114 00:04:49.637 10:43:19 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:49.637 10:43:19 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:49.637 10:43:19 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60114' 00:04:49.637 10:43:19 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60114 00:04:49.637 10:43:19 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60114 00:04:49.896 [2024-07-25 10:43:19.490717] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
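The scheduler_create_thread subtest above is driven entirely through rpc.py with a test-local plugin: it creates four pinned "active" threads (masks 0x1-0x8, 100% active), four pinned idle ones, a one-third-active and a half-active thread, bumps the half-active thread (id 11 in this run) to 50%, then creates and deletes one more thread (id 12) before the app is killed. A rough sketch of those calls, assuming the scheduler app is still listening on the default /var/tmp/spdk.sock and that the scheduler_plugin module is importable (the harness arranges that; thread ids will differ between runs):

    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # prints the new thread id
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # id 11 in this run
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100                # id 12 in this run
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12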
00:04:50.154 ************************************ 00:04:50.154 END TEST event_scheduler 00:04:50.154 ************************************ 00:04:50.154 00:04:50.154 real 0m4.662s 00:04:50.154 user 0m8.490s 00:04:50.154 sys 0m0.415s 00:04:50.154 10:43:19 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.154 10:43:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.154 10:43:19 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:50.154 10:43:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:50.154 10:43:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.154 10:43:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.154 10:43:19 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.154 ************************************ 00:04:50.154 START TEST app_repeat 00:04:50.154 ************************************ 00:04:50.155 10:43:19 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:50.155 10:43:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.155 10:43:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.155 10:43:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:50.155 10:43:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.155 10:43:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:50.155 10:43:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:50.155 10:43:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:50.413 Process app_repeat pid: 60213 00:04:50.413 spdk_app_start Round 0 00:04:50.413 10:43:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60213 00:04:50.413 10:43:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.413 10:43:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60213' 00:04:50.413 10:43:19 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:50.413 10:43:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.413 10:43:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:50.413 10:43:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60213 /var/tmp/spdk-nbd.sock 00:04:50.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.413 10:43:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60213 ']' 00:04:50.413 10:43:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.413 10:43:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.413 10:43:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.413 10:43:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.413 10:43:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.413 [2024-07-25 10:43:19.923262] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:50.413 [2024-07-25 10:43:19.923372] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60213 ] 00:04:50.413 [2024-07-25 10:43:20.056603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.671 [2024-07-25 10:43:20.196883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.671 [2024-07-25 10:43:20.196891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.672 [2024-07-25 10:43:20.273690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:51.239 10:43:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.239 10:43:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:51.239 10:43:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.498 Malloc0 00:04:51.498 10:43:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.756 Malloc1 00:04:51.756 10:43:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.756 10:43:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.757 10:43:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.015 /dev/nbd0 00:04:52.015 10:43:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.015 10:43:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:52.015 10:43:21 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.015 1+0 records in 00:04:52.015 1+0 records out 00:04:52.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706088 s, 5.8 MB/s 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:52.015 10:43:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:52.015 10:43:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.015 10:43:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.015 10:43:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.273 /dev/nbd1 00:04:52.588 10:43:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.588 10:43:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.588 1+0 records in 00:04:52.588 1+0 records out 00:04:52.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000850981 s, 4.8 MB/s 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:52.588 10:43:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:52.588 10:43:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.588 10:43:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.588 10:43:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:04:52.588 10:43:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.588 10:43:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.588 10:43:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.588 { 00:04:52.588 "nbd_device": "/dev/nbd0", 00:04:52.588 "bdev_name": "Malloc0" 00:04:52.588 }, 00:04:52.588 { 00:04:52.588 "nbd_device": "/dev/nbd1", 00:04:52.588 "bdev_name": "Malloc1" 00:04:52.588 } 00:04:52.588 ]' 00:04:52.588 10:43:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.588 10:43:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.588 { 00:04:52.588 "nbd_device": "/dev/nbd0", 00:04:52.588 "bdev_name": "Malloc0" 00:04:52.588 }, 00:04:52.588 { 00:04:52.588 "nbd_device": "/dev/nbd1", 00:04:52.588 "bdev_name": "Malloc1" 00:04:52.588 } 00:04:52.588 ]' 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.861 /dev/nbd1' 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.861 /dev/nbd1' 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.861 256+0 records in 00:04:52.861 256+0 records out 00:04:52.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00713987 s, 147 MB/s 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.861 256+0 records in 00:04:52.861 256+0 records out 00:04:52.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274319 s, 38.2 MB/s 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.861 256+0 records in 00:04:52.861 256+0 records out 00:04:52.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271932 s, 38.6 MB/s 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.861 10:43:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.862 10:43:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.862 10:43:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.862 10:43:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.862 10:43:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.862 10:43:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.862 10:43:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.862 10:43:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.862 10:43:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.862 10:43:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.120 10:43:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.120 10:43:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.120 10:43:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.120 10:43:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.120 10:43:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.120 10:43:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.120 10:43:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.120 10:43:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.120 10:43:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.120 10:43:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.379 10:43:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.380 10:43:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.380 10:43:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.380 10:43:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.380 10:43:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.380 10:43:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.380 10:43:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.380 10:43:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.380 10:43:22 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.380 10:43:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.380 10:43:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.639 10:43:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.639 10:43:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.897 10:43:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.156 [2024-07-25 10:43:23.864221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.413 [2024-07-25 10:43:23.988483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.413 [2024-07-25 10:43:23.988490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.413 [2024-07-25 10:43:24.061060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.413 [2024-07-25 10:43:24.061216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.413 [2024-07-25 10:43:24.061231] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.017 spdk_app_start Round 1 00:04:57.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.017 10:43:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.017 10:43:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:57.017 10:43:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60213 /var/tmp/spdk-nbd.sock 00:04:57.017 10:43:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60213 ']' 00:04:57.017 10:43:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.017 10:43:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.017 10:43:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
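Round 1, starting here, repeats the malloc-bdev-over-NBD data check that round 0 just finished: Malloc0 and Malloc1 are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each device with dd and compared back with cmp. A condensed sketch of that per-round flow for a single device, using only commands that appear in this log (socket path, sizes and the temp-file name are the ones from this run):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096                       # 64 MB malloc bdev, 4 KiB blocks (named Malloc0 here)
    $RPC nbd_start_disk Malloc0 /dev/nbd0                 # expose the bdev as an NBD block device
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB reference pattern
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                    # fails the check if the data read back differs
    $RPC nbd_stop_disk /dev/nbd0
    rm nbdrandtest

The harness additionally lists the exported devices with nbd_get_disks and checks that exactly two come back before the verify and zero after the teardown.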
00:04:57.017 10:43:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.017 10:43:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.276 10:43:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.276 10:43:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:57.276 10:43:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.534 Malloc0 00:04:57.534 10:43:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.793 Malloc1 00:04:57.793 10:43:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.794 10:43:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.052 /dev/nbd0 00:04:58.052 10:43:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.052 10:43:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.053 1+0 records in 00:04:58.053 1+0 records out 
00:04:58.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540978 s, 7.6 MB/s 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:58.053 10:43:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:58.053 10:43:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.053 10:43:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.053 10:43:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.311 /dev/nbd1 00:04:58.311 10:43:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.311 10:43:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.311 10:43:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.312 1+0 records in 00:04:58.312 1+0 records out 00:04:58.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568524 s, 7.2 MB/s 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:58.312 10:43:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:58.312 10:43:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.312 10:43:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.312 10:43:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.312 10:43:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.312 10:43:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:58.570 { 00:04:58.570 "nbd_device": "/dev/nbd0", 00:04:58.570 "bdev_name": "Malloc0" 00:04:58.570 }, 00:04:58.570 { 00:04:58.570 "nbd_device": "/dev/nbd1", 00:04:58.570 "bdev_name": "Malloc1" 00:04:58.570 } 
00:04:58.570 ]' 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.570 { 00:04:58.570 "nbd_device": "/dev/nbd0", 00:04:58.570 "bdev_name": "Malloc0" 00:04:58.570 }, 00:04:58.570 { 00:04:58.570 "nbd_device": "/dev/nbd1", 00:04:58.570 "bdev_name": "Malloc1" 00:04:58.570 } 00:04:58.570 ]' 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.570 /dev/nbd1' 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.570 /dev/nbd1' 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.570 256+0 records in 00:04:58.570 256+0 records out 00:04:58.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00725097 s, 145 MB/s 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.570 10:43:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.829 256+0 records in 00:04:58.829 256+0 records out 00:04:58.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240526 s, 43.6 MB/s 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.829 256+0 records in 00:04:58.829 256+0 records out 00:04:58.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326219 s, 32.1 MB/s 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.829 10:43:28 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.829 10:43:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.088 10:43:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.088 10:43:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.088 10:43:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.088 10:43:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.088 10:43:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.089 10:43:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.089 10:43:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.089 10:43:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.089 10:43:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.089 10:43:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.348 10:43:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.607 10:43:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.607 10:43:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.866 10:43:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.125 [2024-07-25 10:43:29.849795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.383 [2024-07-25 10:43:29.978498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.383 [2024-07-25 10:43:29.978506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.383 [2024-07-25 10:43:30.053289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:00.383 [2024-07-25 10:43:30.053380] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.384 [2024-07-25 10:43:30.053393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.913 spdk_app_start Round 2 00:05:02.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.913 10:43:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.913 10:43:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:02.913 10:43:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60213 /var/tmp/spdk-nbd.sock 00:05:02.913 10:43:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60213 ']' 00:05:02.913 10:43:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.913 10:43:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.913 10:43:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
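The disk-count check that closes each round above reduces to a short pipeline: ask the target for its exported NBD disks over RPC, pull the device paths out with jq, and count how many match /dev/nbd. A stand-alone sketch of that check, reconstructed from the nbd_get_count trace (bdev/nbd_common.sh@61-@66); the variable names are illustrative and the commands are assumed to run from the SPDK repository root:

    rpc_sock=/var/tmp/spdk-nbd.sock
    disks_json=$(scripts/rpc.py -s "$rpc_sock" nbd_get_disks)
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    # grep -c prints the match count but exits non-zero when it is 0, hence the || true guard
    count=$(echo "$disks_name" | grep -c /dev/nbd || true)
    echo "exported NBD devices: $count"

With both malloc bdevs exported the count comes back as 2; after nbd_stop_disk and spdk_kill_instance it drops to 0, which is what the '[' 0 -ne 0 ']' test above asserts.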
00:05:02.913 10:43:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.913 10:43:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.172 10:43:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.172 10:43:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:03.172 10:43:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.431 Malloc0 00:05:03.431 10:43:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.690 Malloc1 00:05:03.690 10:43:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.690 10:43:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.947 /dev/nbd0 00:05:03.947 10:43:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.947 10:43:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.947 1+0 records in 00:05:03.947 1+0 records out 
00:05:03.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057902 s, 7.1 MB/s 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:03.947 10:43:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.205 10:43:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:04.205 10:43:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:04.205 10:43:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.205 10:43:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.205 10:43:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.463 /dev/nbd1 00:05:04.463 10:43:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.463 10:43:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.463 1+0 records in 00:05:04.463 1+0 records out 00:05:04.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552083 s, 7.4 MB/s 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:04.463 10:43:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:04.463 10:43:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.463 10:43:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.463 10:43:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.463 10:43:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.463 10:43:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.722 { 00:05:04.722 "nbd_device": "/dev/nbd0", 00:05:04.722 "bdev_name": "Malloc0" 00:05:04.722 }, 00:05:04.722 { 00:05:04.722 "nbd_device": "/dev/nbd1", 00:05:04.722 "bdev_name": "Malloc1" 00:05:04.722 } 00:05:04.722 
]' 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.722 { 00:05:04.722 "nbd_device": "/dev/nbd0", 00:05:04.722 "bdev_name": "Malloc0" 00:05:04.722 }, 00:05:04.722 { 00:05:04.722 "nbd_device": "/dev/nbd1", 00:05:04.722 "bdev_name": "Malloc1" 00:05:04.722 } 00:05:04.722 ]' 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.722 /dev/nbd1' 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.722 /dev/nbd1' 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.722 256+0 records in 00:05:04.722 256+0 records out 00:05:04.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498467 s, 210 MB/s 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.722 256+0 records in 00:05:04.722 256+0 records out 00:05:04.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243241 s, 43.1 MB/s 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.722 256+0 records in 00:05:04.722 256+0 records out 00:05:04.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268461 s, 39.1 MB/s 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.722 10:43:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.981 10:43:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.981 10:43:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.981 10:43:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.981 10:43:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.981 10:43:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.981 10:43:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.981 10:43:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.981 10:43:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.981 10:43:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.981 10:43:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.239 10:43:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.497 10:43:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.497 10:43:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.065 10:43:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.324 [2024-07-25 10:43:35.828092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.324 [2024-07-25 10:43:35.975688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.324 [2024-07-25 10:43:35.975700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.324 [2024-07-25 10:43:36.048960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:06.324 [2024-07-25 10:43:36.049096] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.324 [2024-07-25 10:43:36.049109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.857 10:43:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60213 /var/tmp/spdk-nbd.sock 00:05:08.857 10:43:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60213 ']' 00:05:08.857 10:43:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.857 10:43:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.857 10:43:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
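Every nbd_stop_disk above is followed by a poll of /proc/partitions (bdev/nbd_common.sh@35-@45) so the test does not race the kernel while it tears the device down. Reconstructed from the commands visible in the trace; only the 20-iteration bound, the grep, the break and the return are shown there, so the retry sleep below is an assumption:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1        # still registered; retry interval assumed
            else
                break            # device is gone
            fi
        done
        return 0
    }
    waitfornbd_exit nbd0         # e.g. right after nbd_stop_disk /dev/nbd0

As reconstructed, the helper returns 0 either way; the rounds above rely on the later nbd_get_disks count reaching 0 to catch a device that never went away.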
00:05:08.857 10:43:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.857 10:43:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:09.114 10:43:38 event.app_repeat -- event/event.sh@39 -- # killprocess 60213 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60213 ']' 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60213 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60213 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.114 killing process with pid 60213 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60213' 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60213 00:05:09.114 10:43:38 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60213 00:05:09.681 spdk_app_start is called in Round 0. 00:05:09.681 Shutdown signal received, stop current app iteration 00:05:09.681 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:05:09.681 spdk_app_start is called in Round 1. 00:05:09.681 Shutdown signal received, stop current app iteration 00:05:09.681 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:05:09.681 spdk_app_start is called in Round 2. 00:05:09.681 Shutdown signal received, stop current app iteration 00:05:09.681 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:05:09.681 spdk_app_start is called in Round 3. 00:05:09.681 Shutdown signal received, stop current app iteration 00:05:09.681 10:43:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:09.681 10:43:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:09.681 00:05:09.681 real 0m19.246s 00:05:09.681 user 0m42.590s 00:05:09.681 sys 0m3.138s 00:05:09.681 10:43:39 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.681 10:43:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.681 ************************************ 00:05:09.681 END TEST app_repeat 00:05:09.681 ************************************ 00:05:09.681 10:43:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:09.681 10:43:39 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:09.681 10:43:39 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.681 10:43:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.681 10:43:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.681 ************************************ 00:05:09.681 START TEST cpu_locks 00:05:09.681 ************************************ 00:05:09.681 10:43:39 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:09.681 * Looking for test storage... 
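Taken together, the rounds above show the whole app_repeat cycle: wait until the app listens on the NBD RPC socket, create two 64 MB malloc bdevs, run the NBD write/verify pass, then ask the app to recycle itself with spdk_kill_instance SIGTERM and pause before the next round. Condensed from the event.sh line numbers in the trace; waitforlisten, nbd_rpc_data_verify and killprocess are the suite's own helpers (not reproduced here) and the pid variable name is illustrative:

    rpc=/var/tmp/spdk-nbd.sock
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$app_pid" "$rpc"
        scripts/rpc.py -s "$rpc" bdev_malloc_create 64 4096          # Malloc0
        scripts/rpc.py -s "$rpc" bdev_malloc_create 64 4096          # Malloc1
        nbd_rpc_data_verify "$rpc" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        scripts/rpc.py -s "$rpc" spdk_kill_instance SIGTERM          # app restarts itself
        sleep 3
    done
    waitforlisten "$app_pid" "$rpc"
    killprocess "$app_pid"                                           # final teardown after Round 3

The 'spdk_app_start is called in Round N / Shutdown signal received' lines above are the app's side of the same cycle: each SIGTERM ends one iteration and the app reinitializes for the next.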
00:05:09.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:09.681 10:43:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:09.681 10:43:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:09.681 10:43:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:09.681 10:43:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:09.681 10:43:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.681 10:43:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.681 10:43:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.681 ************************************ 00:05:09.681 START TEST default_locks 00:05:09.681 ************************************ 00:05:09.681 10:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:09.681 10:43:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60646 00:05:09.681 10:43:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.681 10:43:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60646 00:05:09.681 10:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60646 ']' 00:05:09.681 10:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.681 10:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.681 10:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.681 10:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.681 10:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.681 [2024-07-25 10:43:39.335748] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:09.681 [2024-07-25 10:43:39.335841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60646 ] 00:05:09.939 [2024-07-25 10:43:39.468267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.939 [2024-07-25 10:43:39.616370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.197 [2024-07-25 10:43:39.694800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.762 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.762 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:10.762 10:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60646 00:05:10.762 10:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60646 00:05:10.762 10:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60646 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60646 ']' 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60646 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60646 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.021 killing process with pid 60646 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60646' 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60646 00:05:11.021 10:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60646 00:05:11.587 10:43:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60646 00:05:11.587 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:11.587 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60646 00:05:11.587 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:11.587 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.587 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:11.587 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60646 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60646 ']' 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.588 10:43:41 
event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.588 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60646) - No such process 00:05:11.588 ERROR: process (pid: 60646) is no longer running 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.588 00:05:11.588 real 0m1.948s 00:05:11.588 user 0m2.000s 00:05:11.588 sys 0m0.596s 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.588 ************************************ 00:05:11.588 END TEST default_locks 00:05:11.588 ************************************ 00:05:11.588 10:43:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.588 10:43:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:11.588 10:43:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.588 10:43:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.588 10:43:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.588 ************************************ 00:05:11.588 START TEST default_locks_via_rpc 00:05:11.588 ************************************ 00:05:11.588 10:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:11.588 10:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60698 00:05:11.588 10:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60698 00:05:11.588 10:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60698 ']' 00:05:11.588 10:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.588 10:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.588 10:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:05:11.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.588 10:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.588 10:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.588 10:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.856 [2024-07-25 10:43:41.346823] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:11.856 [2024-07-25 10:43:41.346932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60698 ] 00:05:11.856 [2024-07-25 10:43:41.480490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.115 [2024-07-25 10:43:41.626174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.115 [2024-07-25 10:43:41.699842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:12.680 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.680 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:12.680 10:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:12.680 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.680 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.680 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.680 10:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:12.681 10:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.681 10:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.681 10:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.681 10:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.681 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.681 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.681 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.681 10:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60698 00:05:12.681 10:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60698 00:05:12.681 10:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.248 10:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60698 00:05:13.248 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60698 ']' 00:05:13.248 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60698 00:05:13.248 10:43:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:13.248 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.248 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60698 00:05:13.248 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.248 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.248 killing process with pid 60698 00:05:13.248 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60698' 00:05:13.248 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60698 00:05:13.248 10:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60698 00:05:13.815 00:05:13.815 real 0m1.977s 00:05:13.815 user 0m2.009s 00:05:13.815 sys 0m0.601s 00:05:13.815 10:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.815 10:43:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.815 ************************************ 00:05:13.815 END TEST default_locks_via_rpc 00:05:13.815 ************************************ 00:05:13.815 10:43:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:13.815 10:43:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.815 10:43:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.815 10:43:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.815 ************************************ 00:05:13.815 START TEST non_locking_app_on_locked_coremask 00:05:13.815 ************************************ 00:05:13.815 10:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:13.815 10:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60749 00:05:13.815 10:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.815 10:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60749 /var/tmp/spdk.sock 00:05:13.815 10:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60749 ']' 00:05:13.815 10:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.815 10:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.815 10:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
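Both default_locks tests above hinge on a single observable: whether the target process holds a file lock whose name contains spdk_cpu_lock for its claimed core. The check traced at event/cpu_locks.sh@22 is just lslocks piped into grep, and default_locks_via_rpc toggles the locks at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs. A small sketch; the function wrapper and echo lines are illustrative, and rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py on the default /var/tmp/spdk.sock socket:

    locks_exist() {
        # true if the given pid holds a lock on an spdk_cpu_lock file
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    scripts/rpc.py framework_disable_cpumask_locks      # release the per-core lock files
    locks_exist "$spdk_tgt_pid" || echo "no core locks held"
    scripts/rpc.py framework_enable_cpumask_locks       # take them again
    locks_exist "$spdk_tgt_pid" && echo "core locks held again"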
00:05:13.815 10:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.815 10:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.815 [2024-07-25 10:43:43.364497] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:13.815 [2024-07-25 10:43:43.364560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60749 ] 00:05:13.815 [2024-07-25 10:43:43.500019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.073 [2024-07-25 10:43:43.635845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.073 [2024-07-25 10:43:43.711258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.639 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.639 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:14.639 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:14.639 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60765 00:05:14.639 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60765 /var/tmp/spdk2.sock 00:05:14.639 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60765 ']' 00:05:14.639 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.640 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.640 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.640 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.640 10:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.897 [2024-07-25 10:43:44.417768] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:14.897 [2024-07-25 10:43:44.417869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60765 ] 00:05:14.897 [2024-07-25 10:43:44.554490] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
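non_locking_app_on_locked_coremask, which starts here, runs two targets on the same single-core mask: the first takes the core 0 lock as usual, while the second passes --disable-cpumask-locks plus its own RPC socket so it can come up anyway, which is what the 'CPU core locks deactivated' notice above reports. Stripped of the suite plumbing, the launch sequence is roughly the following; waitforlisten is the suite helper that polls the RPC socket, and the backgrounding and pid variables are illustrative:

    build/bin/spdk_tgt -m 0x1 &                             # takes the core 0 lock
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!                                        # same mask, but no lock is taken
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock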
00:05:14.897 [2024-07-25 10:43:44.554533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.156 [2024-07-25 10:43:44.839546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.414 [2024-07-25 10:43:44.988269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:15.980 10:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.980 10:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:15.980 10:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60749 00:05:15.980 10:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60749 00:05:15.980 10:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.545 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60749 00:05:16.545 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60749 ']' 00:05:16.545 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60749 00:05:16.545 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:16.545 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.545 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60749 00:05:16.545 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.545 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.546 killing process with pid 60749 00:05:16.546 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60749' 00:05:16.546 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60749 00:05:16.546 10:43:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60749 00:05:17.919 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60765 00:05:17.919 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60765 ']' 00:05:17.919 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60765 00:05:17.919 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:17.919 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.919 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60765 00:05:17.919 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.919 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.919 killing process with pid 60765 00:05:17.919 10:43:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60765' 00:05:17.919 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60765 00:05:17.919 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60765 00:05:18.177 00:05:18.177 real 0m4.481s 00:05:18.177 user 0m4.784s 00:05:18.177 sys 0m1.215s 00:05:18.177 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.177 ************************************ 00:05:18.177 END TEST non_locking_app_on_locked_coremask 00:05:18.177 ************************************ 00:05:18.177 10:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.177 10:43:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:18.177 10:43:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.177 10:43:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.177 10:43:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.177 ************************************ 00:05:18.177 START TEST locking_app_on_unlocked_coremask 00:05:18.177 ************************************ 00:05:18.177 10:43:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:18.177 10:43:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60842 00:05:18.177 10:43:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60842 /var/tmp/spdk.sock 00:05:18.177 10:43:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:18.177 10:43:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60842 ']' 00:05:18.177 10:43:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.177 10:43:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.177 10:43:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.177 10:43:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.177 10:43:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.177 [2024-07-25 10:43:47.906034] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:18.177 [2024-07-25 10:43:47.906136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60842 ] 00:05:18.435 [2024-07-25 10:43:48.043596] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:18.435 [2024-07-25 10:43:48.043627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.692 [2024-07-25 10:43:48.175117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.692 [2024-07-25 10:43:48.245903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60859 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60859 /var/tmp/spdk2.sock 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60859 ']' 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.258 10:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.258 [2024-07-25 10:43:48.943300] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:19.258 [2024-07-25 10:43:48.943392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60859 ] 00:05:19.516 [2024-07-25 10:43:49.086013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.775 [2024-07-25 10:43:49.368348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.775 [2024-07-25 10:43:49.510505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:20.341 10:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.341 10:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:20.341 10:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60859 00:05:20.341 10:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60859 00:05:20.341 10:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60842 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60842 ']' 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60842 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60842 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.276 killing process with pid 60842 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60842' 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60842 00:05:21.276 10:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60842 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60859 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60859 ']' 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60859 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60859 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.239 killing process with pid 60859 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60859' 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60859 00:05:22.239 10:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60859 00:05:22.806 00:05:22.806 real 0m4.646s 00:05:22.806 user 0m4.971s 00:05:22.806 sys 0m1.264s 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.806 ************************************ 00:05:22.806 END TEST locking_app_on_unlocked_coremask 00:05:22.806 ************************************ 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.806 10:43:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:22.806 10:43:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.806 10:43:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.806 10:43:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.806 ************************************ 00:05:22.806 START TEST locking_app_on_locked_coremask 00:05:22.806 ************************************ 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60926 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60926 /var/tmp/spdk.sock 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60926 ']' 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.806 10:43:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.064 [2024-07-25 10:43:52.631002] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
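locking_app_on_unlocked_coremask, which just finished, is the mirror case: the first target starts with --disable-cpumask-locks, so an ordinary second target on the same -m 0x1 mask can still start and run alongside it. The test beginning here, locking_app_on_locked_coremask, drops that flag again, so a second instance on the claimed core is expected to be refused. In outline, with helper names as in the trace and backgrounding/pid variables illustrative:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # first instance opts out of the lock
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # plain second instance starts anyway
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock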
00:05:23.064 [2024-07-25 10:43:52.631151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60926 ] 00:05:23.064 [2024-07-25 10:43:52.769947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.322 [2024-07-25 10:43:52.898533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.322 [2024-07-25 10:43:52.968898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60942 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60942 /var/tmp/spdk2.sock 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60942 /var/tmp/spdk2.sock 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60942 /var/tmp/spdk2.sock 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60942 ']' 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.887 10:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.144 [2024-07-25 10:43:53.649724] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
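What the NOT waitforlisten wrapper above asserts: a second spdk_tgt started on the same -m 0x1 mask, with its RPC socket moved to /var/tmp/spdk2.sock, must refuse to come up while the first instance holds the core-0 lock. A compressed sketch of that scenario, assuming the same binary path as in the trace; the real test polls the RPC socket instead of sleeping:

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt    # path as used in the trace
    "$SPDK_BIN" -m 0x1 &                                        # first instance claims core 0
    first=$!
    sleep 1                                                     # crude readiness wait, for illustration only
    "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock \
        && echo "unexpected: second instance started" >&2 \
        || echo "second instance aborted: core 0 is already locked"
    kill "$first"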
00:05:24.144 [2024-07-25 10:43:53.649833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60942 ] 00:05:24.144 [2024-07-25 10:43:53.793563] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60926 has claimed it. 00:05:24.144 [2024-07-25 10:43:53.793629] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:24.711 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60942) - No such process 00:05:24.711 ERROR: process (pid: 60942) is no longer running 00:05:24.711 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.711 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:24.711 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:24.711 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.711 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.711 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.711 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60926 00:05:24.711 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60926 00:05:24.711 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60926 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60926 ']' 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60926 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60926 00:05:25.278 killing process with pid 60926 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60926' 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60926 00:05:25.278 10:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60926 00:05:25.847 00:05:25.847 real 0m2.817s 00:05:25.847 user 0m3.159s 00:05:25.847 sys 0m0.725s 00:05:25.847 10:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.847 ************************************ 00:05:25.847 END 
TEST locking_app_on_locked_coremask 00:05:25.847 ************************************ 00:05:25.847 10:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.847 10:43:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:25.847 10:43:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.847 10:43:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.847 10:43:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.847 ************************************ 00:05:25.847 START TEST locking_overlapped_coremask 00:05:25.847 ************************************ 00:05:25.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.847 10:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:25.847 10:43:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60993 00:05:25.847 10:43:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:25.847 10:43:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60993 /var/tmp/spdk.sock 00:05:25.847 10:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60993 ']' 00:05:25.847 10:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.847 10:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.847 10:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.847 10:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.847 10:43:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.847 [2024-07-25 10:43:55.503539] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
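waitforlisten, traced many times in this run, just retries until the freshly started target answers on its UNIX-domain RPC socket (its xtrace shows local max_retries=100). A rough approximation of that loop, using the rpc.py client and socket path that appear in the trace; the real helper in autotest_common.sh performs a few additional checks:

    sock=/var/tmp/spdk.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do                                   # mirrors max_retries=100
        if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            echo "target is listening on $sock"
            break
        fi
        sleep 0.1
    done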
00:05:25.847 [2024-07-25 10:43:55.503648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60993 ] 00:05:26.106 [2024-07-25 10:43:55.638832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.106 [2024-07-25 10:43:55.780501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.106 [2024-07-25 10:43:55.780610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.106 [2024-07-25 10:43:55.780623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.364 [2024-07-25 10:43:55.853163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61011 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61011 /var/tmp/spdk2.sock 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61011 /var/tmp/spdk2.sock 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61011 /var/tmp/spdk2.sock 00:05:26.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61011 ']' 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.931 10:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.931 [2024-07-25 10:43:56.496738] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
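The overlapped-coremask case works because the two masks used here share exactly one core: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the only contested core is core 2, the core named in the claim error that follows. The intersection can be confirmed with plain shell arithmetic:

    printf 'shared core mask: 0x%x\n' $(( 0x7 & 0x1c ))         # prints 0x4, i.e. only bit 2 (core 2) is set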
00:05:26.931 [2024-07-25 10:43:56.496832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61011 ] 00:05:26.931 [2024-07-25 10:43:56.637113] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60993 has claimed it. 00:05:26.931 [2024-07-25 10:43:56.637196] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.498 ERROR: process (pid: 61011) is no longer running 00:05:27.498 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61011) - No such process 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60993 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60993 ']' 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60993 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.498 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60993 00:05:27.757 killing process with pid 60993 00:05:27.757 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.757 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.757 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60993' 00:05:27.757 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60993 00:05:27.757 10:43:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60993 00:05:28.324 ************************************ 00:05:28.324 END TEST locking_overlapped_coremask 00:05:28.324 ************************************ 00:05:28.324 00:05:28.324 real 0m2.408s 00:05:28.324 user 0m6.442s 00:05:28.324 sys 0m0.514s 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.324 10:43:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.324 10:43:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.324 10:43:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.324 10:43:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.324 ************************************ 00:05:28.324 START TEST locking_overlapped_coremask_via_rpc 00:05:28.324 ************************************ 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:28.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61051 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61051 /var/tmp/spdk.sock 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61051 ']' 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.324 10:43:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.324 [2024-07-25 10:43:57.930813] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:28.324 [2024-07-25 10:43:57.930932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61051 ] 00:05:28.583 [2024-07-25 10:43:58.063514] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
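In the via_rpc variant both targets are started with --disable-cpumask-locks, which is why the trace prints "CPU core locks deactivated." instead of creating /var/tmp/spdk_cpu_lock_* files at startup; the locks are claimed later through the framework_enable_cpumask_locks RPC. A minimal sketch of that startup, reusing the flags and socket paths from the trace:

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x7  --disable-cpumask-locks &                          # primary, no lock files yet
    "$SPDK_BIN" -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # secondary starts despite the overlap on core 2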
00:05:28.583 [2024-07-25 10:43:58.063572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.583 [2024-07-25 10:43:58.208626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.583 [2024-07-25 10:43:58.208776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.583 [2024-07-25 10:43:58.208778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.583 [2024-07-25 10:43:58.282910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61075 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61075 /var/tmp/spdk2.sock 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61075 ']' 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.518 10:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.518 [2024-07-25 10:43:59.007708] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:29.518 [2024-07-25 10:43:59.007997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61075 ] 00:05:29.518 [2024-07-25 10:43:59.150830] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:29.518 [2024-07-25 10:43:59.150898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.777 [2024-07-25 10:43:59.439050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.777 [2024-07-25 10:43:59.443038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:29.777 [2024-07-25 10:43:59.443041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.034 [2024-07-25 10:43:59.585278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.602 [2024-07-25 10:44:00.074074] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61051 has claimed it. 
00:05:30.602 request: 00:05:30.602 { 00:05:30.602 "method": "framework_enable_cpumask_locks", 00:05:30.602 "req_id": 1 00:05:30.602 } 00:05:30.602 Got JSON-RPC error response 00:05:30.602 response: 00:05:30.602 { 00:05:30.602 "code": -32603, 00:05:30.602 "message": "Failed to claim CPU core: 2" 00:05:30.602 } 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61051 /var/tmp/spdk.sock 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61051 ']' 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.602 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.861 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.861 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:30.861 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61075 /var/tmp/spdk2.sock 00:05:30.861 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61075 ']' 00:05:30.861 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.861 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.861 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
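The JSON-RPC exchange above is the heart of the test: once the primary target (pid 61051) has taken its locks via framework_enable_cpumask_locks, the same RPC issued to the secondary on /var/tmp/spdk2.sock fails with -32603 because core 2 is already claimed. Issued by hand, the two calls would look roughly like this (rpc.py path and sockets as in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" framework_enable_cpumask_locks                           # primary on /var/tmp/spdk.sock: succeeds
    "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # secondary: "Failed to claim CPU core: 2"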
00:05:30.861 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.861 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.120 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.120 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:31.120 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:31.120 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.120 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.120 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.120 ************************************ 00:05:31.120 END TEST locking_overlapped_coremask_via_rpc 00:05:31.120 ************************************ 00:05:31.120 00:05:31.120 real 0m2.808s 00:05:31.120 user 0m1.434s 00:05:31.120 sys 0m0.211s 00:05:31.120 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.120 10:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.120 10:44:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:31.120 10:44:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61051 ]] 00:05:31.120 10:44:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61051 00:05:31.120 10:44:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61051 ']' 00:05:31.120 10:44:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61051 00:05:31.120 10:44:00 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:31.120 10:44:00 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.120 10:44:00 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61051 00:05:31.120 killing process with pid 61051 00:05:31.120 10:44:00 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.120 10:44:00 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.120 10:44:00 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61051' 00:05:31.120 10:44:00 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61051 00:05:31.120 10:44:00 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61051 00:05:31.688 10:44:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61075 ]] 00:05:31.688 10:44:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61075 00:05:31.688 10:44:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61075 ']' 00:05:31.688 10:44:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61075 00:05:31.688 10:44:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:31.688 10:44:01 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.688 
10:44:01 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61075 00:05:31.688 killing process with pid 61075 00:05:31.688 10:44:01 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:31.688 10:44:01 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:31.688 10:44:01 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61075' 00:05:31.688 10:44:01 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61075 00:05:31.688 10:44:01 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61075 00:05:32.254 10:44:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.254 Process with pid 61051 is not found 00:05:32.254 10:44:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:32.254 10:44:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61051 ]] 00:05:32.254 10:44:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61051 00:05:32.254 10:44:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61051 ']' 00:05:32.254 10:44:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61051 00:05:32.254 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61051) - No such process 00:05:32.254 10:44:01 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61051 is not found' 00:05:32.254 10:44:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61075 ]] 00:05:32.254 10:44:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61075 00:05:32.254 10:44:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61075 ']' 00:05:32.254 10:44:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61075 00:05:32.254 Process with pid 61075 is not found 00:05:32.254 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61075) - No such process 00:05:32.254 10:44:01 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61075 is not found' 00:05:32.254 10:44:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.254 00:05:32.254 real 0m22.612s 00:05:32.254 user 0m38.514s 00:05:32.254 sys 0m6.140s 00:05:32.254 ************************************ 00:05:32.254 END TEST cpu_locks 00:05:32.254 ************************************ 00:05:32.254 10:44:01 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.254 10:44:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.254 00:05:32.254 real 0m51.142s 00:05:32.254 user 1m36.376s 00:05:32.254 sys 0m10.165s 00:05:32.254 10:44:01 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.254 10:44:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.255 ************************************ 00:05:32.255 END TEST event 00:05:32.255 ************************************ 00:05:32.255 10:44:01 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:32.255 10:44:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.255 10:44:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.255 10:44:01 -- common/autotest_common.sh@10 -- # set +x 00:05:32.255 ************************************ 00:05:32.255 START TEST thread 00:05:32.255 ************************************ 00:05:32.255 10:44:01 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:32.255 * Looking for test storage... 
00:05:32.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:32.255 10:44:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.255 10:44:01 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:32.255 10:44:01 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.255 10:44:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.513 ************************************ 00:05:32.513 START TEST thread_poller_perf 00:05:32.513 ************************************ 00:05:32.513 10:44:01 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.513 [2024-07-25 10:44:02.010166] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:32.513 [2024-07-25 10:44:02.010305] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61202 ] 00:05:32.513 [2024-07-25 10:44:02.153566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.772 [2024-07-25 10:44:02.273366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.772 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:33.709 ====================================== 00:05:33.709 busy:2208002763 (cyc) 00:05:33.709 total_run_count: 324000 00:05:33.709 tsc_hz: 2200000000 (cyc) 00:05:33.709 ====================================== 00:05:33.709 poller_cost: 6814 (cyc), 3097 (nsec) 00:05:33.709 00:05:33.709 real 0m1.397s 00:05:33.709 user 0m1.220s 00:05:33.709 sys 0m0.065s 00:05:33.709 10:44:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.709 10:44:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.709 ************************************ 00:05:33.709 END TEST thread_poller_perf 00:05:33.709 ************************************ 00:05:33.709 10:44:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.709 10:44:03 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:33.709 10:44:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.709 10:44:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.966 ************************************ 00:05:33.966 START TEST thread_poller_perf 00:05:33.966 ************************************ 00:05:33.966 10:44:03 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.966 [2024-07-25 10:44:03.470060] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:33.966 [2024-07-25 10:44:03.470459] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61233 ] 00:05:33.966 [2024-07-25 10:44:03.606661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.224 Running 1000 pollers for 1 seconds with 0 microseconds period. 
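The poller_cost figures reported above follow directly from the counters: 2208002763 busy cycles spread over 324000 poller runs is 6814 cycles per invocation, and at tsc_hz=2200000000 that is roughly 3097 ns. The same arithmetic in shell:

    echo $(( 2208002763 / 324000 ))                                 # 6814 cycles per poller invocation
    echo $(( 2208002763 / 324000 * 1000000000 / 2200000000 ))       # ~3097 ns per invocation at 2.2 GHz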
00:05:34.224 [2024-07-25 10:44:03.728081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.157 ====================================== 00:05:35.157 busy:2202020574 (cyc) 00:05:35.157 total_run_count: 4362000 00:05:35.157 tsc_hz: 2200000000 (cyc) 00:05:35.157 ====================================== 00:05:35.157 poller_cost: 504 (cyc), 229 (nsec) 00:05:35.157 ************************************ 00:05:35.157 END TEST thread_poller_perf 00:05:35.157 ************************************ 00:05:35.157 00:05:35.157 real 0m1.405s 00:05:35.157 user 0m1.225s 00:05:35.157 sys 0m0.070s 00:05:35.157 10:44:04 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.157 10:44:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.416 10:44:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:35.416 ************************************ 00:05:35.416 END TEST thread 00:05:35.416 ************************************ 00:05:35.416 00:05:35.416 real 0m3.002s 00:05:35.416 user 0m2.510s 00:05:35.416 sys 0m0.264s 00:05:35.416 10:44:04 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.416 10:44:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.416 10:44:04 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:05:35.416 10:44:04 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:35.416 10:44:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.416 10:44:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.416 10:44:04 -- common/autotest_common.sh@10 -- # set +x 00:05:35.416 ************************************ 00:05:35.416 START TEST app_cmdline 00:05:35.416 ************************************ 00:05:35.416 10:44:04 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:35.416 * Looking for test storage... 00:05:35.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:35.416 10:44:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:35.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.416 10:44:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61307 00:05:35.416 10:44:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61307 00:05:35.416 10:44:05 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61307 ']' 00:05:35.416 10:44:05 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:35.416 10:44:05 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.416 10:44:05 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.416 10:44:05 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.416 10:44:05 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.416 10:44:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:35.416 [2024-07-25 10:44:05.105279] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
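The cmdline test starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over the socket; anything else, such as the env_dpdk_get_mem_stats attempt further down, is rejected with JSON-RPC error -32601 "Method not found". Exercising the allow-list by hand, with the rpc.py path used in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" spdk_get_version           # allowed: returns the version object shown below
    "$rpc" rpc_get_methods            # allowed: lists the permitted methods
    "$rpc" env_dpdk_get_mem_stats     # not on the allow-list: error -32601 (Method not found)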
00:05:35.416 [2024-07-25 10:44:05.105394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61307 ] 00:05:35.674 [2024-07-25 10:44:05.239598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.674 [2024-07-25 10:44:05.391652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.932 [2024-07-25 10:44:05.460052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:36.500 10:44:06 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.500 10:44:06 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:36.500 10:44:06 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:36.759 { 00:05:36.759 "version": "SPDK v24.09-pre git sha1 704257090", 00:05:36.759 "fields": { 00:05:36.759 "major": 24, 00:05:36.759 "minor": 9, 00:05:36.759 "patch": 0, 00:05:36.759 "suffix": "-pre", 00:05:36.759 "commit": "704257090" 00:05:36.759 } 00:05:36.759 } 00:05:36.759 10:44:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:36.759 10:44:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:36.759 10:44:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:36.759 10:44:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:36.759 10:44:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:36.759 10:44:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.759 10:44:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.759 10:44:06 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:36.759 10:44:06 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:36.759 10:44:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@644 -- # [[ 
-x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:36.759 10:44:06 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.327 request: 00:05:37.327 { 00:05:37.327 "method": "env_dpdk_get_mem_stats", 00:05:37.327 "req_id": 1 00:05:37.327 } 00:05:37.327 Got JSON-RPC error response 00:05:37.327 response: 00:05:37.327 { 00:05:37.327 "code": -32601, 00:05:37.327 "message": "Method not found" 00:05:37.327 } 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:37.327 10:44:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61307 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61307 ']' 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61307 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61307 00:05:37.327 killing process with pid 61307 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61307' 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@969 -- # kill 61307 00:05:37.327 10:44:06 app_cmdline -- common/autotest_common.sh@974 -- # wait 61307 00:05:37.587 ************************************ 00:05:37.587 END TEST app_cmdline 00:05:37.587 ************************************ 00:05:37.587 00:05:37.587 real 0m2.282s 00:05:37.587 user 0m2.847s 00:05:37.587 sys 0m0.549s 00:05:37.587 10:44:07 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.587 10:44:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.587 10:44:07 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:37.587 10:44:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.587 10:44:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.587 10:44:07 -- common/autotest_common.sh@10 -- # set +x 00:05:37.587 ************************************ 00:05:37.587 START TEST version 00:05:37.587 ************************************ 00:05:37.587 10:44:07 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:37.847 * Looking for test storage... 
00:05:37.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:37.847 10:44:07 version -- app/version.sh@17 -- # get_header_version major 00:05:37.847 10:44:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.847 10:44:07 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.847 10:44:07 version -- app/version.sh@14 -- # cut -f2 00:05:37.847 10:44:07 version -- app/version.sh@17 -- # major=24 00:05:37.847 10:44:07 version -- app/version.sh@18 -- # get_header_version minor 00:05:37.847 10:44:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.847 10:44:07 version -- app/version.sh@14 -- # cut -f2 00:05:37.847 10:44:07 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.848 10:44:07 version -- app/version.sh@18 -- # minor=9 00:05:37.848 10:44:07 version -- app/version.sh@19 -- # get_header_version patch 00:05:37.848 10:44:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.848 10:44:07 version -- app/version.sh@14 -- # cut -f2 00:05:37.848 10:44:07 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.848 10:44:07 version -- app/version.sh@19 -- # patch=0 00:05:37.848 10:44:07 version -- app/version.sh@20 -- # get_header_version suffix 00:05:37.848 10:44:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:37.848 10:44:07 version -- app/version.sh@14 -- # cut -f2 00:05:37.848 10:44:07 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.848 10:44:07 version -- app/version.sh@20 -- # suffix=-pre 00:05:37.848 10:44:07 version -- app/version.sh@22 -- # version=24.9 00:05:37.848 10:44:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:37.848 10:44:07 version -- app/version.sh@28 -- # version=24.9rc0 00:05:37.848 10:44:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:37.848 10:44:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:37.848 10:44:07 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:37.848 10:44:07 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:37.848 00:05:37.848 real 0m0.149s 00:05:37.848 user 0m0.067s 00:05:37.848 sys 0m0.113s 00:05:37.848 ************************************ 00:05:37.848 END TEST version 00:05:37.848 ************************************ 00:05:37.848 10:44:07 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.848 10:44:07 version -- common/autotest_common.sh@10 -- # set +x 00:05:37.848 10:44:07 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:05:37.848 10:44:07 -- spdk/autotest.sh@202 -- # uname -s 00:05:37.848 10:44:07 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:05:37.848 10:44:07 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:37.848 10:44:07 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:05:37.848 10:44:07 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:05:37.848 10:44:07 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:37.848 10:44:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.848 10:44:07 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.848 10:44:07 -- common/autotest_common.sh@10 -- # set +x 00:05:37.848 ************************************ 00:05:37.848 START TEST spdk_dd 00:05:37.848 ************************************ 00:05:37.848 10:44:07 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:37.848 * Looking for test storage... 00:05:37.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:37.848 10:44:07 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:37.848 10:44:07 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.848 10:44:07 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.848 10:44:07 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.848 10:44:07 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.848 10:44:07 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.848 10:44:07 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.848 10:44:07 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:37.848 10:44:07 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.848 10:44:07 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:38.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.417 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:38.417 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:38.417 10:44:07 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:38.417 10:44:07 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:05:38.417 10:44:07 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@230 -- # local class 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@232 -- # local progif 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@233 -- # class=01 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:38.417 10:44:07 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:38.417 10:44:08 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:38.417 10:44:08 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:38.417 10:44:08 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:38.417 10:44:08 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:38.417 10:44:08 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:38.417 10:44:08 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:38.417 10:44:08 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:38.417 10:44:08 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:05:38.417 10:44:08 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:05:38.417 10:44:08 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:38.417 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* 
]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 
== liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.418 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == 
liburing.so.* ]] 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:38.419 * spdk_dd linked to liburing 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:38.419 10:44:08 spdk_dd -- 
common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 
00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:38.419 10:44:08 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:38.419 10:44:08 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:38.419 10:44:08 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:38.419 10:44:08 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:38.419 10:44:08 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:38.419 10:44:08 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.419 10:44:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:38.419 ************************************ 00:05:38.419 START TEST spdk_dd_basic_rw 00:05:38.419 ************************************ 00:05:38.419 10:44:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:38.680 * Looking for test storage... 00:05:38.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:38.680 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:38.681 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted 
Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not 
Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:38.681 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete 
Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): 
Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b 
Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.682 ************************************ 00:05:38.682 START TEST dd_bs_lt_native_bs 00:05:38.682 ************************************ 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:38.682 10:44:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:38.941 { 00:05:38.941 "subsystems": [ 00:05:38.941 { 00:05:38.941 "subsystem": "bdev", 00:05:38.941 "config": [ 00:05:38.941 { 00:05:38.941 "params": { 00:05:38.941 "trtype": "pcie", 00:05:38.941 "traddr": "0000:00:10.0", 00:05:38.941 "name": "Nvme0" 00:05:38.941 }, 00:05:38.941 "method": 
"bdev_nvme_attach_controller" 00:05:38.941 }, 00:05:38.941 { 00:05:38.941 "method": "bdev_wait_for_examine" 00:05:38.941 } 00:05:38.941 ] 00:05:38.941 } 00:05:38.941 ] 00:05:38.941 } 00:05:38.941 [2024-07-25 10:44:08.460185] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:38.941 [2024-07-25 10:44:08.460568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61635 ] 00:05:38.941 [2024-07-25 10:44:08.602614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.199 [2024-07-25 10:44:08.764201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.199 [2024-07-25 10:44:08.840906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:39.470 [2024-07-25 10:44:08.957700] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:39.470 [2024-07-25 10:44:08.957763] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.470 [2024-07-25 10:44:09.141791] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.730 00:05:39.730 real 0m0.875s 00:05:39.730 user 0m0.607s 00:05:39.730 sys 0m0.209s 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.730 ************************************ 00:05:39.730 END TEST dd_bs_lt_native_bs 00:05:39.730 ************************************ 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:39.730 ************************************ 00:05:39.730 START TEST dd_rw 00:05:39.730 ************************************ 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:39.730 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.297 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:40.297 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:40.297 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.297 10:44:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.297 { 00:05:40.297 "subsystems": [ 00:05:40.297 { 00:05:40.297 "subsystem": "bdev", 00:05:40.297 "config": [ 00:05:40.297 { 00:05:40.297 "params": { 00:05:40.297 "trtype": "pcie", 00:05:40.297 "traddr": "0000:00:10.0", 00:05:40.297 "name": "Nvme0" 00:05:40.297 }, 00:05:40.297 "method": "bdev_nvme_attach_controller" 00:05:40.297 }, 00:05:40.297 { 00:05:40.297 "method": "bdev_wait_for_examine" 00:05:40.297 } 00:05:40.297 ] 00:05:40.297 } 00:05:40.297 ] 00:05:40.297 } 00:05:40.297 [2024-07-25 10:44:10.003102] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
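The identify dump traced further up is what get_native_nvme_bs in dd/common.sh consumes: the controller report is matched once for the index of the LBA format currently in use and once for that format's data size, which is how the suite settles on a native block size of 4096 bytes for the QEMU controller at 0000:00:10.0. A minimal hand-run sketch of that probe is below; it assumes an SPDK build under $SPDK and root access for the identify tool, and the re_lbaf/re_bs variable names are illustrative rather than the script's own (the script captures the report with mapfile instead of a plain string).

    SPDK=/home/vagrant/spdk_repo/spdk
    pci=0000:00:10.0
    # Grab the full controller/namespace report, as dd/common.sh@126 does.
    id=$("$SPDK/build/bin/spdk_nvme_identify" -r "trtype:pcie traddr:$pci")
    # First pattern extracts the in-use LBA format index, the second pulls that
    # format's data size; both patterns are the ones visible in the trace above.
    re_lbaf='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re_lbaf ]] && lbaf=${BASH_REMATCH[1]}
    re_bs="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re_bs ]] && native_bs=${BASH_REMATCH[1]}
    echo "$native_bs"    # 4096 here, from 'LBA Format #04: Data Size: 4096'

That 4096 value is also what makes the dd_bs_lt_native_bs case above fail as intended when spdk_dd is asked for --bs=2048.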
00:05:40.297 [2024-07-25 10:44:10.003245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61671 ] 00:05:40.555 [2024-07-25 10:44:10.152523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.555 [2024-07-25 10:44:10.284670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.814 [2024-07-25 10:44:10.358317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:41.072  Copying: 60/60 [kB] (average 29 MBps) 00:05:41.072 00:05:41.072 10:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:41.072 10:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:41.072 10:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.072 10:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.330 { 00:05:41.330 "subsystems": [ 00:05:41.330 { 00:05:41.330 "subsystem": "bdev", 00:05:41.330 "config": [ 00:05:41.330 { 00:05:41.330 "params": { 00:05:41.330 "trtype": "pcie", 00:05:41.330 "traddr": "0000:00:10.0", 00:05:41.330 "name": "Nvme0" 00:05:41.330 }, 00:05:41.330 "method": "bdev_nvme_attach_controller" 00:05:41.330 }, 00:05:41.330 { 00:05:41.330 "method": "bdev_wait_for_examine" 00:05:41.330 } 00:05:41.330 ] 00:05:41.330 } 00:05:41.330 ] 00:05:41.330 } 00:05:41.330 [2024-07-25 10:44:10.833103] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
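Each basic_rw pass drives spdk_dd directly against the raw controller: gen_conf prints the JSON shown in the trace (a bdev subsystem with bdev_nvme_attach_controller for 0000:00:10.0 plus bdev_wait_for_examine) and hands it to --json over an anonymous file descriptor, while --ob names the resulting Nvme0n1 bdev. A rough hand-runnable equivalent of the qd=1 write pass follows, assuming the repo at /home/vagrant/spdk_repo/spdk; plain dd over /dev/urandom stands in here for the suite's gen_bytes helper, it only needs to leave 61440 bytes in dd.dump0.

    SPDK=/home/vagrant/spdk_repo/spdk
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
       "method":"bdev_nvme_attach_controller"},
      {"method":"bdev_wait_for_examine"}]}]}'
    # Stand-in for gen_bytes: 15 blocks of the 4096-byte native block size = 61440 bytes.
    dd if=/dev/urandom of="$SPDK/test/dd/dd.dump0" bs=4096 count=15
    # Write pass, matching dd/basic_rw.sh@30: file -> bdev at queue depth 1;
    # process substitution gives the /dev/fd/NN path seen as --json /dev/fd/62 above.
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/dd/dd.dump0" --ob=Nvme0n1 \
        --bs=4096 --qd=1 --json <(printf '%s' "$conf")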
00:05:41.330 [2024-07-25 10:44:10.833203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61685 ] 00:05:41.330 [2024-07-25 10:44:10.972388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.589 [2024-07-25 10:44:11.103247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.589 [2024-07-25 10:44:11.178745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.155  Copying: 60/60 [kB] (average 14 MBps) 00:05:42.155 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.155 10:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.155 { 00:05:42.155 "subsystems": [ 00:05:42.155 { 00:05:42.155 "subsystem": "bdev", 00:05:42.155 "config": [ 00:05:42.155 { 00:05:42.155 "params": { 00:05:42.155 "trtype": "pcie", 00:05:42.155 "traddr": "0000:00:10.0", 00:05:42.155 "name": "Nvme0" 00:05:42.155 }, 00:05:42.155 "method": "bdev_nvme_attach_controller" 00:05:42.155 }, 00:05:42.155 { 00:05:42.155 "method": "bdev_wait_for_examine" 00:05:42.155 } 00:05:42.155 ] 00:05:42.155 } 00:05:42.155 ] 00:05:42.155 } 00:05:42.155 [2024-07-25 10:44:11.659944] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:42.155 [2024-07-25 10:44:11.660221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61706 ] 00:05:42.155 [2024-07-25 10:44:11.800564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.413 [2024-07-25 10:44:11.946611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.414 [2024-07-25 10:44:12.025008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.981  Copying: 1024/1024 [kB] (average 500 MBps) 00:05:42.981 00:05:42.981 10:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:42.981 10:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:42.981 10:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:42.981 10:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:42.981 10:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:42.981 10:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:42.981 10:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.563 10:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:43.563 10:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:43.563 10:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.563 10:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.563 [2024-07-25 10:44:13.172126] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:43.563 [2024-07-25 10:44:13.172481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61730 ] 00:05:43.563 { 00:05:43.563 "subsystems": [ 00:05:43.563 { 00:05:43.563 "subsystem": "bdev", 00:05:43.563 "config": [ 00:05:43.563 { 00:05:43.563 "params": { 00:05:43.563 "trtype": "pcie", 00:05:43.563 "traddr": "0000:00:10.0", 00:05:43.563 "name": "Nvme0" 00:05:43.563 }, 00:05:43.563 "method": "bdev_nvme_attach_controller" 00:05:43.563 }, 00:05:43.563 { 00:05:43.563 "method": "bdev_wait_for_examine" 00:05:43.563 } 00:05:43.563 ] 00:05:43.563 } 00:05:43.563 ] 00:05:43.563 } 00:05:43.849 [2024-07-25 10:44:13.312492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.849 [2024-07-25 10:44:13.463695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.849 [2024-07-25 10:44:13.535955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:44.365  Copying: 60/60 [kB] (average 58 MBps) 00:05:44.365 00:05:44.365 10:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:44.365 10:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:44.365 10:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:44.365 10:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.365 [2024-07-25 10:44:14.027596] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:44.365 [2024-07-25 10:44:14.027696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61744 ] 00:05:44.365 { 00:05:44.365 "subsystems": [ 00:05:44.365 { 00:05:44.365 "subsystem": "bdev", 00:05:44.365 "config": [ 00:05:44.365 { 00:05:44.365 "params": { 00:05:44.365 "trtype": "pcie", 00:05:44.365 "traddr": "0000:00:10.0", 00:05:44.365 "name": "Nvme0" 00:05:44.365 }, 00:05:44.365 "method": "bdev_nvme_attach_controller" 00:05:44.365 }, 00:05:44.365 { 00:05:44.365 "method": "bdev_wait_for_examine" 00:05:44.365 } 00:05:44.365 ] 00:05:44.365 } 00:05:44.365 ] 00:05:44.365 } 00:05:44.623 [2024-07-25 10:44:14.165610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.623 [2024-07-25 10:44:14.312637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.882 [2024-07-25 10:44:14.386862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:45.141  Copying: 60/60 [kB] (average 29 MBps) 00:05:45.141 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:45.141 10:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.141 [2024-07-25 10:44:14.872583] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:45.141 [2024-07-25 10:44:14.872680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61765 ] 00:05:45.141 { 00:05:45.141 "subsystems": [ 00:05:45.141 { 00:05:45.141 "subsystem": "bdev", 00:05:45.141 "config": [ 00:05:45.141 { 00:05:45.141 "params": { 00:05:45.141 "trtype": "pcie", 00:05:45.141 "traddr": "0000:00:10.0", 00:05:45.141 "name": "Nvme0" 00:05:45.141 }, 00:05:45.141 "method": "bdev_nvme_attach_controller" 00:05:45.141 }, 00:05:45.141 { 00:05:45.141 "method": "bdev_wait_for_examine" 00:05:45.141 } 00:05:45.141 ] 00:05:45.141 } 00:05:45.141 ] 00:05:45.141 } 00:05:45.398 [2024-07-25 10:44:15.007585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.657 [2024-07-25 10:44:15.154110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.657 [2024-07-25 10:44:15.228178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:45.914  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:45.914 00:05:45.914 10:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:45.914 10:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:45.914 10:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:45.914 10:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:45.914 10:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:45.914 10:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:45.914 10:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:45.914 10:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.479 10:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:46.479 10:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:46.479 10:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.479 10:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.737 { 00:05:46.737 "subsystems": [ 00:05:46.737 { 00:05:46.737 "subsystem": "bdev", 00:05:46.737 "config": [ 00:05:46.737 { 00:05:46.737 "params": { 00:05:46.737 "trtype": "pcie", 00:05:46.737 "traddr": "0000:00:10.0", 00:05:46.737 "name": "Nvme0" 00:05:46.737 }, 00:05:46.737 "method": "bdev_nvme_attach_controller" 00:05:46.737 }, 00:05:46.737 { 00:05:46.737 "method": "bdev_wait_for_examine" 00:05:46.738 } 00:05:46.738 ] 00:05:46.738 } 00:05:46.738 ] 00:05:46.738 } 00:05:46.738 [2024-07-25 10:44:16.270688] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:46.738 [2024-07-25 10:44:16.270996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61789 ] 00:05:46.738 [2024-07-25 10:44:16.407329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.995 [2024-07-25 10:44:16.525204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.995 [2024-07-25 10:44:16.598676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:47.561  Copying: 56/56 [kB] (average 54 MBps) 00:05:47.561 00:05:47.561 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:47.561 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:47.561 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.561 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.561 { 00:05:47.561 "subsystems": [ 00:05:47.561 { 00:05:47.561 "subsystem": "bdev", 00:05:47.561 "config": [ 00:05:47.561 { 00:05:47.561 "params": { 00:05:47.561 "trtype": "pcie", 00:05:47.561 "traddr": "0000:00:10.0", 00:05:47.561 "name": "Nvme0" 00:05:47.561 }, 00:05:47.561 "method": "bdev_nvme_attach_controller" 00:05:47.561 }, 00:05:47.561 { 00:05:47.561 "method": "bdev_wait_for_examine" 00:05:47.561 } 00:05:47.561 ] 00:05:47.561 } 00:05:47.561 ] 00:05:47.561 } 00:05:47.561 [2024-07-25 10:44:17.060130] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:47.561 [2024-07-25 10:44:17.060233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61803 ] 00:05:47.561 [2024-07-25 10:44:17.200349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.820 [2024-07-25 10:44:17.312156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.820 [2024-07-25 10:44:17.388934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:48.121  Copying: 56/56 [kB] (average 27 MBps) 00:05:48.121 00:05:48.121 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:48.121 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:48.121 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:48.121 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:48.122 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:48.122 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:48.122 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:48.122 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:48.122 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:48.122 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.122 10:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.379 [2024-07-25 10:44:17.873138] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:48.379 [2024-07-25 10:44:17.873241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61824 ] 00:05:48.379 { 00:05:48.379 "subsystems": [ 00:05:48.379 { 00:05:48.379 "subsystem": "bdev", 00:05:48.379 "config": [ 00:05:48.379 { 00:05:48.379 "params": { 00:05:48.379 "trtype": "pcie", 00:05:48.379 "traddr": "0000:00:10.0", 00:05:48.379 "name": "Nvme0" 00:05:48.379 }, 00:05:48.379 "method": "bdev_nvme_attach_controller" 00:05:48.379 }, 00:05:48.379 { 00:05:48.380 "method": "bdev_wait_for_examine" 00:05:48.380 } 00:05:48.380 ] 00:05:48.380 } 00:05:48.380 ] 00:05:48.380 } 00:05:48.380 [2024-07-25 10:44:18.011828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.638 [2024-07-25 10:44:18.146895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.638 [2024-07-25 10:44:18.225414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:49.203  Copying: 1024/1024 [kB] (average 500 MBps) 00:05:49.203 00:05:49.203 10:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:49.203 10:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:49.203 10:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:49.203 10:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:49.203 10:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:49.203 10:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:49.203 10:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.769 10:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:49.769 10:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:49.769 10:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.769 10:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.769 [2024-07-25 10:44:19.278926] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:49.769 [2024-07-25 10:44:19.279449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61843 ] 00:05:49.769 { 00:05:49.769 "subsystems": [ 00:05:49.769 { 00:05:49.769 "subsystem": "bdev", 00:05:49.769 "config": [ 00:05:49.769 { 00:05:49.769 "params": { 00:05:49.769 "trtype": "pcie", 00:05:49.769 "traddr": "0000:00:10.0", 00:05:49.769 "name": "Nvme0" 00:05:49.769 }, 00:05:49.769 "method": "bdev_nvme_attach_controller" 00:05:49.769 }, 00:05:49.769 { 00:05:49.769 "method": "bdev_wait_for_examine" 00:05:49.769 } 00:05:49.769 ] 00:05:49.769 } 00:05:49.769 ] 00:05:49.769 } 00:05:49.769 [2024-07-25 10:44:19.415529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.027 [2024-07-25 10:44:19.557879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.027 [2024-07-25 10:44:19.629897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.594  Copying: 56/56 [kB] (average 54 MBps) 00:05:50.594 00:05:50.594 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:50.594 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:50.594 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.594 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.594 [2024-07-25 10:44:20.100618] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:50.594 [2024-07-25 10:44:20.100710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61862 ] 00:05:50.594 { 00:05:50.594 "subsystems": [ 00:05:50.594 { 00:05:50.594 "subsystem": "bdev", 00:05:50.594 "config": [ 00:05:50.594 { 00:05:50.594 "params": { 00:05:50.594 "trtype": "pcie", 00:05:50.594 "traddr": "0000:00:10.0", 00:05:50.594 "name": "Nvme0" 00:05:50.594 }, 00:05:50.594 "method": "bdev_nvme_attach_controller" 00:05:50.594 }, 00:05:50.594 { 00:05:50.595 "method": "bdev_wait_for_examine" 00:05:50.595 } 00:05:50.595 ] 00:05:50.595 } 00:05:50.595 ] 00:05:50.595 } 00:05:50.595 [2024-07-25 10:44:20.235927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.852 [2024-07-25 10:44:20.387589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.852 [2024-07-25 10:44:20.464718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.419  Copying: 56/56 [kB] (average 54 MBps) 00:05:51.419 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.419 10:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.419 [2024-07-25 10:44:20.932174] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:51.419 [2024-07-25 10:44:20.932265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61883 ] 00:05:51.419 { 00:05:51.419 "subsystems": [ 00:05:51.419 { 00:05:51.419 "subsystem": "bdev", 00:05:51.419 "config": [ 00:05:51.419 { 00:05:51.419 "params": { 00:05:51.419 "trtype": "pcie", 00:05:51.419 "traddr": "0000:00:10.0", 00:05:51.419 "name": "Nvme0" 00:05:51.419 }, 00:05:51.419 "method": "bdev_nvme_attach_controller" 00:05:51.419 }, 00:05:51.420 { 00:05:51.420 "method": "bdev_wait_for_examine" 00:05:51.420 } 00:05:51.420 ] 00:05:51.420 } 00:05:51.420 ] 00:05:51.420 } 00:05:51.420 [2024-07-25 10:44:21.064408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.678 [2024-07-25 10:44:21.190368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.678 [2024-07-25 10:44:21.266855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.244  Copying: 1024/1024 [kB] (average 500 MBps) 00:05:52.244 00:05:52.244 10:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:52.244 10:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:52.244 10:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:52.244 10:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:52.244 10:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:52.244 10:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:52.244 10:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:52.244 10:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.502 10:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:52.502 10:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:52.502 10:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.502 10:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.502 [2024-07-25 10:44:22.181123] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:52.502 [2024-07-25 10:44:22.181233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61904 ] 00:05:52.502 { 00:05:52.502 "subsystems": [ 00:05:52.502 { 00:05:52.502 "subsystem": "bdev", 00:05:52.502 "config": [ 00:05:52.502 { 00:05:52.502 "params": { 00:05:52.502 "trtype": "pcie", 00:05:52.502 "traddr": "0000:00:10.0", 00:05:52.502 "name": "Nvme0" 00:05:52.502 }, 00:05:52.502 "method": "bdev_nvme_attach_controller" 00:05:52.502 }, 00:05:52.502 { 00:05:52.502 "method": "bdev_wait_for_examine" 00:05:52.502 } 00:05:52.502 ] 00:05:52.502 } 00:05:52.502 ] 00:05:52.502 } 00:05:52.760 [2024-07-25 10:44:22.320077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.760 [2024-07-25 10:44:22.446187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.053 [2024-07-25 10:44:22.521043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:53.312  Copying: 48/48 [kB] (average 46 MBps) 00:05:53.312 00:05:53.312 10:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:53.312 10:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:53.312 10:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.312 10:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.312 { 00:05:53.312 "subsystems": [ 00:05:53.312 { 00:05:53.312 "subsystem": "bdev", 00:05:53.312 "config": [ 00:05:53.312 { 00:05:53.312 "params": { 00:05:53.312 "trtype": "pcie", 00:05:53.312 "traddr": "0000:00:10.0", 00:05:53.312 "name": "Nvme0" 00:05:53.312 }, 00:05:53.312 "method": "bdev_nvme_attach_controller" 00:05:53.312 }, 00:05:53.312 { 00:05:53.312 "method": "bdev_wait_for_examine" 00:05:53.312 } 00:05:53.312 ] 00:05:53.312 } 00:05:53.312 ] 00:05:53.312 } 00:05:53.312 [2024-07-25 10:44:22.983033] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:53.312 [2024-07-25 10:44:22.983142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61923 ] 00:05:53.570 [2024-07-25 10:44:23.120413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.570 [2024-07-25 10:44:23.245252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.836 [2024-07-25 10:44:23.321290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.095  Copying: 48/48 [kB] (average 23 MBps) 00:05:54.095 00:05:54.095 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.095 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:54.096 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:54.096 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:54.096 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:54.096 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:54.096 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:54.096 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:54.096 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:54.096 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.096 10:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.096 [2024-07-25 10:44:23.781119] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:54.096 [2024-07-25 10:44:23.781216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61944 ] 00:05:54.096 { 00:05:54.096 "subsystems": [ 00:05:54.096 { 00:05:54.096 "subsystem": "bdev", 00:05:54.096 "config": [ 00:05:54.096 { 00:05:54.096 "params": { 00:05:54.096 "trtype": "pcie", 00:05:54.096 "traddr": "0000:00:10.0", 00:05:54.096 "name": "Nvme0" 00:05:54.096 }, 00:05:54.096 "method": "bdev_nvme_attach_controller" 00:05:54.096 }, 00:05:54.096 { 00:05:54.096 "method": "bdev_wait_for_examine" 00:05:54.096 } 00:05:54.096 ] 00:05:54.096 } 00:05:54.096 ] 00:05:54.096 } 00:05:54.354 [2024-07-25 10:44:23.914291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.354 [2024-07-25 10:44:24.027754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.613 [2024-07-25 10:44:24.104452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.872  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:54.872 00:05:54.872 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:54.872 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:54.872 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:54.872 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:54.872 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:54.872 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:54.872 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.438 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:55.438 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:55.438 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.438 10:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.438 [2024-07-25 10:44:25.037153] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:55.438 [2024-07-25 10:44:25.037270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61963 ] 00:05:55.438 { 00:05:55.438 "subsystems": [ 00:05:55.438 { 00:05:55.438 "subsystem": "bdev", 00:05:55.439 "config": [ 00:05:55.439 { 00:05:55.439 "params": { 00:05:55.439 "trtype": "pcie", 00:05:55.439 "traddr": "0000:00:10.0", 00:05:55.439 "name": "Nvme0" 00:05:55.439 }, 00:05:55.439 "method": "bdev_nvme_attach_controller" 00:05:55.439 }, 00:05:55.439 { 00:05:55.439 "method": "bdev_wait_for_examine" 00:05:55.439 } 00:05:55.439 ] 00:05:55.439 } 00:05:55.439 ] 00:05:55.439 } 00:05:55.696 [2024-07-25 10:44:25.176963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.696 [2024-07-25 10:44:25.305279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.696 [2024-07-25 10:44:25.381625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.213  Copying: 48/48 [kB] (average 46 MBps) 00:05:56.213 00:05:56.213 10:44:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:56.213 10:44:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:56.213 10:44:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.213 10:44:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.213 { 00:05:56.213 "subsystems": [ 00:05:56.213 { 00:05:56.213 "subsystem": "bdev", 00:05:56.213 "config": [ 00:05:56.213 { 00:05:56.213 "params": { 00:05:56.213 "trtype": "pcie", 00:05:56.213 "traddr": "0000:00:10.0", 00:05:56.213 "name": "Nvme0" 00:05:56.213 }, 00:05:56.213 "method": "bdev_nvme_attach_controller" 00:05:56.213 }, 00:05:56.213 { 00:05:56.213 "method": "bdev_wait_for_examine" 00:05:56.213 } 00:05:56.213 ] 00:05:56.213 } 00:05:56.213 ] 00:05:56.213 } 00:05:56.213 [2024-07-25 10:44:25.862904] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:56.213 [2024-07-25 10:44:25.863013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61982 ] 00:05:56.473 [2024-07-25 10:44:25.999993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.473 [2024-07-25 10:44:26.149970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.732 [2024-07-25 10:44:26.230812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.991  Copying: 48/48 [kB] (average 46 MBps) 00:05:56.991 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.991 10:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.250 { 00:05:57.250 "subsystems": [ 00:05:57.250 { 00:05:57.250 "subsystem": "bdev", 00:05:57.250 "config": [ 00:05:57.250 { 00:05:57.250 "params": { 00:05:57.250 "trtype": "pcie", 00:05:57.250 "traddr": "0000:00:10.0", 00:05:57.250 "name": "Nvme0" 00:05:57.250 }, 00:05:57.250 "method": "bdev_nvme_attach_controller" 00:05:57.250 }, 00:05:57.250 { 00:05:57.250 "method": "bdev_wait_for_examine" 00:05:57.250 } 00:05:57.250 ] 00:05:57.250 } 00:05:57.250 ] 00:05:57.250 } 00:05:57.250 [2024-07-25 10:44:26.756804] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:57.250 [2024-07-25 10:44:26.757489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62004 ] 00:05:57.250 [2024-07-25 10:44:26.895899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.509 [2024-07-25 10:44:27.050284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.509 [2024-07-25 10:44:27.129969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.030  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:58.030 00:05:58.030 ************************************ 00:05:58.030 END TEST dd_rw 00:05:58.030 ************************************ 00:05:58.030 00:05:58.030 real 0m18.231s 00:05:58.030 user 0m13.387s 00:05:58.030 sys 0m7.201s 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.030 ************************************ 00:05:58.030 START TEST dd_rw_offset 00:05:58.030 ************************************ 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=lwnzdka5gmyqnyglnk037q52m7tewq8rhe4flephrcxh7n119n7qpmyav1bazwqsfye7wrd2fk422tjfzsfz2w6mbwje11q0zt0vxr7ftowgjvyaot20pf3hkcqi5ij0lhaijsf3pgj999geqkiygw50540pnliywks5no7t4foizhv81lbgdkiaausl2fe86k85d8p34hbrwhcfhk3v6fii46rax0u6r1n70dodzorizwdc0fbq26878fdhm1hymwsnl7w1o3uyjv9fnulx9mkvo2dxzy4hz8gt43jt1x6kdcemo74n4js8efgxtekbtgyro4kq247xf81w3ffuvvs0k23nnmi0lxt5lpgefb094lps1rgbabhx9q0k1krfvc6fmtg4pndmarmgplfs3b5fm6sp29hvwvdiylaqemrvq03n6ydzp2ndopb2sbqcxhr398ue9kv11gypebowe049727t965dihah1cc2oi0d3zzo8h3eh5qzg0qswm6t7uujijq9il34bsaxqtqmwvym15j3eelp73ydpxuuxb1ltq6l7n59542toono3kmn0rmmffqwa18q7pn6r9pxxiebafggqpylr9egebdmtkzcktwze28auftdz2fbp6tgp74jljp0pdd9sqwyag6sglupa3jsmsfjlsimjku1mjyvt9if36qtav4ihcejsx5kckg77uzr5i5ohr7cps9en2u1dlfx3r2au0gagbjr3wuc5e4f1bumc8xrg2k8dx5is2b1cebb8dm3nexu69js053wr8078cw354x1nnaxjlndbbjckd6nueys9cxhlnsw2dmwkfdm85o190okqqf8v0vp3e4bypxifs54y9zcg6ydl7ezbffjbd0hx60b4y49aefz5uf1j8nczjmk0fv50lpmxj5rwnoa8yw8lpqpiitckhnka14l7sphhee72iljsr632dvtaqzekvpzmbu9izjn4lp1kqmjrjiilm086hju895tfkvb8156lncbcg4y79pblh1vnx8tlywc1b63mtpzd28d2eha0o6bwag9ek1sj6dd09ur7eoy5508duwvkkyhk9igigcuoudpzepwqm2ymz953592ty116e60qdt1o0hma5b9yf6j32ve94np6mzlnjurgy6ziyu2obbbwf56o8yn99rxql9qwhol95atufu47jrfr6f1shdgzxojqupzpf5ap5ed3fmogcmkz60mcey8qabruaip2czsnrphc6bdgv15pjnwxvlguskix446y89rzwtjenspgplq8u2r0sd0jd1y2jfmc1z0ufufjtndqygryv02hno6myf1mc70bylqmyz7ibx2xphbzkx9w5go11t69pq7jmdam348giqkb64rpu2k0i8giaupwfip3uvvrhbnpb74rhynt5eeqo4o630n8ilog6v16k56ajunsr1rb7vn8o3zu7eg54nbs0r7f575cod0s55wqfavyji3una7yj7f5xepi5vtfc1paxk114dtarkwzjoo4xcpg8tq3m7qn5ufalbmvxajhg9xdg2vwspmg5qha1d36ssmzofpzpe7sl376aaoq43z6koycvkb5qofc3wu4zv81uxw5nyfoefi8z932u9o86yue7jy4tj3ksb0gpark4x88j3nurjmhf8egbsdxp66oxytghespgeme7gm6jijsr042jknf3q201uuees8xdeea8qtkf55na2v2gfbbffl8e5kyf7if8l9t5i541pjg7067fjdkfg2t7pd5ll386emp7wb9s40uf5r0sqn6s5zk1wv6914aijo5a0d7if3xllpttxc6bm9xnadu142o13a48t9seowr4bovnz03f9ffgvgvtj89ks3791v2vr4faffijjvw38c7nr22yxvld472f74rtsgmmxdya99mtea0fizy8ha3rdtsygey1qm3rpwn4ivfthewx23dow62ggrgux6dqeidl29fwwd3f4yx43k2admst2shpj6yydvy8wsnemaey9hjbbbq5j98ulgfv6s9b1czlatttuj5cq1msjo099z9af2i2v4w7yo2qejwe06ulqrl5o4ftrfecu43r0mueojyzeb4z0lalu9ikkjbkrh1qzk01blbimacf2u80cilzbk0ofmoib0kao2fa61j86c14mct1zd8xmpgoku8uyikigky2ex0e09aq1egxiyzc9pt97uqcnk774ujv6lpcmqbklf9typ8ejt4jpqrkp7d8sq1eg2bn77v718z4e0nwcm6u6zoyp1zp9lsgkkk64z395zl3h32611z5yl29neiptmn54nrfv3e09fawr3t3d07yzqq049o9gxptwzx5z2evghu5sf2o1dayvufmzooayewuom1bwtdslkdr6p3wszq7e75p5roy3ju5wdc37ksgfh7wc2604o6mao03hbk0m6o82jd43gaihuvzo5cxozcglijlrnycj480kpccy1gbm6o8y2m9qntqspasaqa784m23sckfqtevmsoyi7yf7do84ss5vxo1zg8f6k4qvjcotogktgc8fie8advnz8jl2w8ny1wo5d1k1qzvjyhpyyoy4g8433j6wfr38cfrw72fhvn453b9rukuit6ogxf2fh6dxb6l81pgzqejslqz9a11m6rjmhqi9n2oqow7ig4sajrbc3mp3r7rcevv2a53simzp28dy3nl4qdzfm8u1nkcmpjhhn5k29fw4p9d167bbcecq6lqxz7y6rx1q9v7njx90r33ifasxjp3hqglqrhpfprjnvhdnldhmb5folooxwuysguhfcbuutlwm56k9l4zum8nniw0tod1x7434yzy9vp1se21ls708y56sb5kwg94hxpcy8aqah4cgpfv9upkqirzsf7t8r9q8vvl8tlgb03qzfs5sdtepma1gr68ivb8rl8c2it76ot19rb0lx9xaanrtlo4j2wt0outbec5j5ksmzp1c9vry2tdc2xtaqyrpyj1ekcksuzb3qer2i5j48syzmpb44vy400ocw6xkxubrwl8n7dgg9egutrauue13bk1ox35s4rc51z2mv7mrshwyn6iagzxwfe5d4jtwpr89bp8ouahnx0y0ul3iiukbflyranl8jqnbmxsrl1uf91jq46gkzh8wmgq9oopazvdig0x222wt3xjaih3cl3z9j1qqan1mmaleyrsqrp2nm66z801stmvbg6sajpkw2khljet13agcc3910no7fu06m9ux7w5kzj74nwsqyz9bjtmh6ksghci22rxyvcgybb67viwu6hcajc0pms3t2eb22c7lcijdpr8r26f47mx2jhheov64ylijn5f8nh8q658xkx3r88p76axxi2dq750yh1jsz3x9hdm6wkow6cfr2mgk28bnqa9t75paamhm5l63ceyajbujj3assgihjpwo8dxzywzmqmp8zetwd8ntgjhauj74y6q585cc6ly0ruaisureuxwhr3udkuriaxynjods9pkgeh
35ne9y911ap9hsxccjq652yqihkxwjvifvnzbmx8s3pggb05cpy6oqb25kc1f3bif72hmvmt9a40su9ni7ppu1fndtzat9mfhsp0exky8zlrz09o9bhfneljh2lyg1bzyx81qw0w9g0sw23t3ojaey4ov2sn335hrn0p1sk08ci55415k9qwleuh4sk1d0lk9a8j4pbaoe81f9tkbe4h3ytwjgvk50xtbtd9gurhbodnmzouf90649jlu8q6gpwyih9a6od34qgrvw1cnp1xfkors2h6y3v7jsmn8eqv8718unumjodmw2ewovf25c18la8bm9cnzlldriwr8vxmro09i9g6ca0i256pfmhzn0n7h3uyldymcwg0hk25usjudj64ls3e8m491cqbfdd8xp06uygg1vxmw5152479z2lsy8pqqrte13dliu2pnzcqnp7rv8hlnjxactrwbufnkw6bcih5g9q00tq5fe047sxgwtrt3yqkxuhfyr444oa78no4imfna8n0rcgnf9wptgikdr5638td4f 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:58.030 10:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:58.030 [2024-07-25 10:44:27.716833] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:58.030 [2024-07-25 10:44:27.716966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62039 ] 00:05:58.030 { 00:05:58.030 "subsystems": [ 00:05:58.030 { 00:05:58.030 "subsystem": "bdev", 00:05:58.030 "config": [ 00:05:58.030 { 00:05:58.030 "params": { 00:05:58.030 "trtype": "pcie", 00:05:58.030 "traddr": "0000:00:10.0", 00:05:58.030 "name": "Nvme0" 00:05:58.030 }, 00:05:58.031 "method": "bdev_nvme_attach_controller" 00:05:58.031 }, 00:05:58.031 { 00:05:58.031 "method": "bdev_wait_for_examine" 00:05:58.031 } 00:05:58.031 ] 00:05:58.031 } 00:05:58.031 ] 00:05:58.031 } 00:05:58.289 [2024-07-25 10:44:27.855948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.289 [2024-07-25 10:44:27.996384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.548 [2024-07-25 10:44:28.071862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.806  Copying: 4096/4096 [B] (average 4000 kBps) 00:05:58.807 00:05:58.807 10:44:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:58.807 10:44:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:58.807 10:44:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:58.807 10:44:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:59.065 [2024-07-25 10:44:28.545107] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:59.065 [2024-07-25 10:44:28.545211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62048 ] 00:05:59.065 { 00:05:59.065 "subsystems": [ 00:05:59.065 { 00:05:59.065 "subsystem": "bdev", 00:05:59.065 "config": [ 00:05:59.065 { 00:05:59.065 "params": { 00:05:59.065 "trtype": "pcie", 00:05:59.065 "traddr": "0000:00:10.0", 00:05:59.065 "name": "Nvme0" 00:05:59.065 }, 00:05:59.065 "method": "bdev_nvme_attach_controller" 00:05:59.065 }, 00:05:59.065 { 00:05:59.065 "method": "bdev_wait_for_examine" 00:05:59.065 } 00:05:59.065 ] 00:05:59.065 } 00:05:59.065 ] 00:05:59.065 } 00:05:59.065 [2024-07-25 10:44:28.678933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.324 [2024-07-25 10:44:28.825338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.324 [2024-07-25 10:44:28.905264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:59.892  Copying: 4096/4096 [B] (average 4000 kBps) 00:05:59.892 00:05:59.892 10:44:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ lwnzdka5gmyqnyglnk037q52m7tewq8rhe4flephrcxh7n119n7qpmyav1bazwqsfye7wrd2fk422tjfzsfz2w6mbwje11q0zt0vxr7ftowgjvyaot20pf3hkcqi5ij0lhaijsf3pgj999geqkiygw50540pnliywks5no7t4foizhv81lbgdkiaausl2fe86k85d8p34hbrwhcfhk3v6fii46rax0u6r1n70dodzorizwdc0fbq26878fdhm1hymwsnl7w1o3uyjv9fnulx9mkvo2dxzy4hz8gt43jt1x6kdcemo74n4js8efgxtekbtgyro4kq247xf81w3ffuvvs0k23nnmi0lxt5lpgefb094lps1rgbabhx9q0k1krfvc6fmtg4pndmarmgplfs3b5fm6sp29hvwvdiylaqemrvq03n6ydzp2ndopb2sbqcxhr398ue9kv11gypebowe049727t965dihah1cc2oi0d3zzo8h3eh5qzg0qswm6t7uujijq9il34bsaxqtqmwvym15j3eelp73ydpxuuxb1ltq6l7n59542toono3kmn0rmmffqwa18q7pn6r9pxxiebafggqpylr9egebdmtkzcktwze28auftdz2fbp6tgp74jljp0pdd9sqwyag6sglupa3jsmsfjlsimjku1mjyvt9if36qtav4ihcejsx5kckg77uzr5i5ohr7cps9en2u1dlfx3r2au0gagbjr3wuc5e4f1bumc8xrg2k8dx5is2b1cebb8dm3nexu69js053wr8078cw354x1nnaxjlndbbjckd6nueys9cxhlnsw2dmwkfdm85o190okqqf8v0vp3e4bypxifs54y9zcg6ydl7ezbffjbd0hx60b4y49aefz5uf1j8nczjmk0fv50lpmxj5rwnoa8yw8lpqpiitckhnka14l7sphhee72iljsr632dvtaqzekvpzmbu9izjn4lp1kqmjrjiilm086hju895tfkvb8156lncbcg4y79pblh1vnx8tlywc1b63mtpzd28d2eha0o6bwag9ek1sj6dd09ur7eoy5508duwvkkyhk9igigcuoudpzepwqm2ymz953592ty116e60qdt1o0hma5b9yf6j32ve94np6mzlnjurgy6ziyu2obbbwf56o8yn99rxql9qwhol95atufu47jrfr6f1shdgzxojqupzpf5ap5ed3fmogcmkz60mcey8qabruaip2czsnrphc6bdgv15pjnwxvlguskix446y89rzwtjenspgplq8u2r0sd0jd1y2jfmc1z0ufufjtndqygryv02hno6myf1mc70bylqmyz7ibx2xphbzkx9w5go11t69pq7jmdam348giqkb64rpu2k0i8giaupwfip3uvvrhbnpb74rhynt5eeqo4o630n8ilog6v16k56ajunsr1rb7vn8o3zu7eg54nbs0r7f575cod0s55wqfavyji3una7yj7f5xepi5vtfc1paxk114dtarkwzjoo4xcpg8tq3m7qn5ufalbmvxajhg9xdg2vwspmg5qha1d36ssmzofpzpe7sl376aaoq43z6koycvkb5qofc3wu4zv81uxw5nyfoefi8z932u9o86yue7jy4tj3ksb0gpark4x88j3nurjmhf8egbsdxp66oxytghespgeme7gm6jijsr042jknf3q201uuees8xdeea8qtkf55na2v2gfbbffl8e5kyf7if8l9t5i541pjg7067fjdkfg2t7pd5ll386emp7wb9s40uf5r0sqn6s5zk1wv6914aijo5a0d7if3xllpttxc6bm9xnadu142o13a48t9seowr4bovnz03f9ffgvgvtj89ks3791v2vr4faffijjvw38c7nr22yxvld472f74rtsgmmxdya99mtea0fizy8ha3rdtsygey1qm3rpwn4ivfthewx23dow62ggrgux6dqeidl29fwwd3f4yx43k2admst2shpj6yydvy8wsnemaey9hjbbbq5j98ulgfv6s9b1czlatttuj5cq1msjo099z9af2i2v4w7yo2qejwe06ulqrl5o4ftrfecu43r0mueojyzeb4z0lalu9ikkjbkrh1qzk01blbimac
f2u80cilzbk0ofmoib0kao2fa61j86c14mct1zd8xmpgoku8uyikigky2ex0e09aq1egxiyzc9pt97uqcnk774ujv6lpcmqbklf9typ8ejt4jpqrkp7d8sq1eg2bn77v718z4e0nwcm6u6zoyp1zp9lsgkkk64z395zl3h32611z5yl29neiptmn54nrfv3e09fawr3t3d07yzqq049o9gxptwzx5z2evghu5sf2o1dayvufmzooayewuom1bwtdslkdr6p3wszq7e75p5roy3ju5wdc37ksgfh7wc2604o6mao03hbk0m6o82jd43gaihuvzo5cxozcglijlrnycj480kpccy1gbm6o8y2m9qntqspasaqa784m23sckfqtevmsoyi7yf7do84ss5vxo1zg8f6k4qvjcotogktgc8fie8advnz8jl2w8ny1wo5d1k1qzvjyhpyyoy4g8433j6wfr38cfrw72fhvn453b9rukuit6ogxf2fh6dxb6l81pgzqejslqz9a11m6rjmhqi9n2oqow7ig4sajrbc3mp3r7rcevv2a53simzp28dy3nl4qdzfm8u1nkcmpjhhn5k29fw4p9d167bbcecq6lqxz7y6rx1q9v7njx90r33ifasxjp3hqglqrhpfprjnvhdnldhmb5folooxwuysguhfcbuutlwm56k9l4zum8nniw0tod1x7434yzy9vp1se21ls708y56sb5kwg94hxpcy8aqah4cgpfv9upkqirzsf7t8r9q8vvl8tlgb03qzfs5sdtepma1gr68ivb8rl8c2it76ot19rb0lx9xaanrtlo4j2wt0outbec5j5ksmzp1c9vry2tdc2xtaqyrpyj1ekcksuzb3qer2i5j48syzmpb44vy400ocw6xkxubrwl8n7dgg9egutrauue13bk1ox35s4rc51z2mv7mrshwyn6iagzxwfe5d4jtwpr89bp8ouahnx0y0ul3iiukbflyranl8jqnbmxsrl1uf91jq46gkzh8wmgq9oopazvdig0x222wt3xjaih3cl3z9j1qqan1mmaleyrsqrp2nm66z801stmvbg6sajpkw2khljet13agcc3910no7fu06m9ux7w5kzj74nwsqyz9bjtmh6ksghci22rxyvcgybb67viwu6hcajc0pms3t2eb22c7lcijdpr8r26f47mx2jhheov64ylijn5f8nh8q658xkx3r88p76axxi2dq750yh1jsz3x9hdm6wkow6cfr2mgk28bnqa9t75paamhm5l63ceyajbujj3assgihjpwo8dxzywzmqmp8zetwd8ntgjhauj74y6q585cc6ly0ruaisureuxwhr3udkuriaxynjods9pkgeh35ne9y911ap9hsxccjq652yqihkxwjvifvnzbmx8s3pggb05cpy6oqb25kc1f3bif72hmvmt9a40su9ni7ppu1fndtzat9mfhsp0exky8zlrz09o9bhfneljh2lyg1bzyx81qw0w9g0sw23t3ojaey4ov2sn335hrn0p1sk08ci55415k9qwleuh4sk1d0lk9a8j4pbaoe81f9tkbe4h3ytwjgvk50xtbtd9gurhbodnmzouf90649jlu8q6gpwyih9a6od34qgrvw1cnp1xfkors2h6y3v7jsmn8eqv8718unumjodmw2ewovf25c18la8bm9cnzlldriwr8vxmro09i9g6ca0i256pfmhzn0n7h3uyldymcwg0hk25usjudj64ls3e8m491cqbfdd8xp06uygg1vxmw5152479z2lsy8pqqrte13dliu2pnzcqnp7rv8hlnjxactrwbufnkw6bcih5g9q00tq5fe047sxgwtrt3yqkxuhfyr444oa78no4imfna8n0rcgnf9wptgikdr5638td4f == 
\l\w\n\z\d\k\a\5\g\m\y\q\n\y\g\l\n\k\0\3\7\q\5\2\m\7\t\e\w\q\8\r\h\e\4\f\l\e\p\h\r\c\x\h\7\n\1\1\9\n\7\q\p\m\y\a\v\1\b\a\z\w\q\s\f\y\e\7\w\r\d\2\f\k\4\2\2\t\j\f\z\s\f\z\2\w\6\m\b\w\j\e\1\1\q\0\z\t\0\v\x\r\7\f\t\o\w\g\j\v\y\a\o\t\2\0\p\f\3\h\k\c\q\i\5\i\j\0\l\h\a\i\j\s\f\3\p\g\j\9\9\9\g\e\q\k\i\y\g\w\5\0\5\4\0\p\n\l\i\y\w\k\s\5\n\o\7\t\4\f\o\i\z\h\v\8\1\l\b\g\d\k\i\a\a\u\s\l\2\f\e\8\6\k\8\5\d\8\p\3\4\h\b\r\w\h\c\f\h\k\3\v\6\f\i\i\4\6\r\a\x\0\u\6\r\1\n\7\0\d\o\d\z\o\r\i\z\w\d\c\0\f\b\q\2\6\8\7\8\f\d\h\m\1\h\y\m\w\s\n\l\7\w\1\o\3\u\y\j\v\9\f\n\u\l\x\9\m\k\v\o\2\d\x\z\y\4\h\z\8\g\t\4\3\j\t\1\x\6\k\d\c\e\m\o\7\4\n\4\j\s\8\e\f\g\x\t\e\k\b\t\g\y\r\o\4\k\q\2\4\7\x\f\8\1\w\3\f\f\u\v\v\s\0\k\2\3\n\n\m\i\0\l\x\t\5\l\p\g\e\f\b\0\9\4\l\p\s\1\r\g\b\a\b\h\x\9\q\0\k\1\k\r\f\v\c\6\f\m\t\g\4\p\n\d\m\a\r\m\g\p\l\f\s\3\b\5\f\m\6\s\p\2\9\h\v\w\v\d\i\y\l\a\q\e\m\r\v\q\0\3\n\6\y\d\z\p\2\n\d\o\p\b\2\s\b\q\c\x\h\r\3\9\8\u\e\9\k\v\1\1\g\y\p\e\b\o\w\e\0\4\9\7\2\7\t\9\6\5\d\i\h\a\h\1\c\c\2\o\i\0\d\3\z\z\o\8\h\3\e\h\5\q\z\g\0\q\s\w\m\6\t\7\u\u\j\i\j\q\9\i\l\3\4\b\s\a\x\q\t\q\m\w\v\y\m\1\5\j\3\e\e\l\p\7\3\y\d\p\x\u\u\x\b\1\l\t\q\6\l\7\n\5\9\5\4\2\t\o\o\n\o\3\k\m\n\0\r\m\m\f\f\q\w\a\1\8\q\7\p\n\6\r\9\p\x\x\i\e\b\a\f\g\g\q\p\y\l\r\9\e\g\e\b\d\m\t\k\z\c\k\t\w\z\e\2\8\a\u\f\t\d\z\2\f\b\p\6\t\g\p\7\4\j\l\j\p\0\p\d\d\9\s\q\w\y\a\g\6\s\g\l\u\p\a\3\j\s\m\s\f\j\l\s\i\m\j\k\u\1\m\j\y\v\t\9\i\f\3\6\q\t\a\v\4\i\h\c\e\j\s\x\5\k\c\k\g\7\7\u\z\r\5\i\5\o\h\r\7\c\p\s\9\e\n\2\u\1\d\l\f\x\3\r\2\a\u\0\g\a\g\b\j\r\3\w\u\c\5\e\4\f\1\b\u\m\c\8\x\r\g\2\k\8\d\x\5\i\s\2\b\1\c\e\b\b\8\d\m\3\n\e\x\u\6\9\j\s\0\5\3\w\r\8\0\7\8\c\w\3\5\4\x\1\n\n\a\x\j\l\n\d\b\b\j\c\k\d\6\n\u\e\y\s\9\c\x\h\l\n\s\w\2\d\m\w\k\f\d\m\8\5\o\1\9\0\o\k\q\q\f\8\v\0\v\p\3\e\4\b\y\p\x\i\f\s\5\4\y\9\z\c\g\6\y\d\l\7\e\z\b\f\f\j\b\d\0\h\x\6\0\b\4\y\4\9\a\e\f\z\5\u\f\1\j\8\n\c\z\j\m\k\0\f\v\5\0\l\p\m\x\j\5\r\w\n\o\a\8\y\w\8\l\p\q\p\i\i\t\c\k\h\n\k\a\1\4\l\7\s\p\h\h\e\e\7\2\i\l\j\s\r\6\3\2\d\v\t\a\q\z\e\k\v\p\z\m\b\u\9\i\z\j\n\4\l\p\1\k\q\m\j\r\j\i\i\l\m\0\8\6\h\j\u\8\9\5\t\f\k\v\b\8\1\5\6\l\n\c\b\c\g\4\y\7\9\p\b\l\h\1\v\n\x\8\t\l\y\w\c\1\b\6\3\m\t\p\z\d\2\8\d\2\e\h\a\0\o\6\b\w\a\g\9\e\k\1\s\j\6\d\d\0\9\u\r\7\e\o\y\5\5\0\8\d\u\w\v\k\k\y\h\k\9\i\g\i\g\c\u\o\u\d\p\z\e\p\w\q\m\2\y\m\z\9\5\3\5\9\2\t\y\1\1\6\e\6\0\q\d\t\1\o\0\h\m\a\5\b\9\y\f\6\j\3\2\v\e\9\4\n\p\6\m\z\l\n\j\u\r\g\y\6\z\i\y\u\2\o\b\b\b\w\f\5\6\o\8\y\n\9\9\r\x\q\l\9\q\w\h\o\l\9\5\a\t\u\f\u\4\7\j\r\f\r\6\f\1\s\h\d\g\z\x\o\j\q\u\p\z\p\f\5\a\p\5\e\d\3\f\m\o\g\c\m\k\z\6\0\m\c\e\y\8\q\a\b\r\u\a\i\p\2\c\z\s\n\r\p\h\c\6\b\d\g\v\1\5\p\j\n\w\x\v\l\g\u\s\k\i\x\4\4\6\y\8\9\r\z\w\t\j\e\n\s\p\g\p\l\q\8\u\2\r\0\s\d\0\j\d\1\y\2\j\f\m\c\1\z\0\u\f\u\f\j\t\n\d\q\y\g\r\y\v\0\2\h\n\o\6\m\y\f\1\m\c\7\0\b\y\l\q\m\y\z\7\i\b\x\2\x\p\h\b\z\k\x\9\w\5\g\o\1\1\t\6\9\p\q\7\j\m\d\a\m\3\4\8\g\i\q\k\b\6\4\r\p\u\2\k\0\i\8\g\i\a\u\p\w\f\i\p\3\u\v\v\r\h\b\n\p\b\7\4\r\h\y\n\t\5\e\e\q\o\4\o\6\3\0\n\8\i\l\o\g\6\v\1\6\k\5\6\a\j\u\n\s\r\1\r\b\7\v\n\8\o\3\z\u\7\e\g\5\4\n\b\s\0\r\7\f\5\7\5\c\o\d\0\s\5\5\w\q\f\a\v\y\j\i\3\u\n\a\7\y\j\7\f\5\x\e\p\i\5\v\t\f\c\1\p\a\x\k\1\1\4\d\t\a\r\k\w\z\j\o\o\4\x\c\p\g\8\t\q\3\m\7\q\n\5\u\f\a\l\b\m\v\x\a\j\h\g\9\x\d\g\2\v\w\s\p\m\g\5\q\h\a\1\d\3\6\s\s\m\z\o\f\p\z\p\e\7\s\l\3\7\6\a\a\o\q\4\3\z\6\k\o\y\c\v\k\b\5\q\o\f\c\3\w\u\4\z\v\8\1\u\x\w\5\n\y\f\o\e\f\i\8\z\9\3\2\u\9\o\8\6\y\u\e\7\j\y\4\t\j\3\k\s\b\0\g\p\a\r\k\4\x\8\8\j\3\n\u\r\j\m\h\f\8\e\g\b\s\d\x\p\6\6\o\x\y\t\g\h\e\s\p\g\e\m\e\7\g\m\6\j\i\j\s\r\0\4\2\j\k\n\f\3\q\2\0\1\u\u\e\e\s\8\x\d\e\e\a\8\q\t\k\f\5\5\n\a\2\v\2\g\f\b\b\f\f\l\8\e\5\k\y\f\7\i\f\8\l\9\t\5\i\5\4\1\p\j\g\7\0\6\7\
f\j\d\k\f\g\2\t\7\p\d\5\l\l\3\8\6\e\m\p\7\w\b\9\s\4\0\u\f\5\r\0\s\q\n\6\s\5\z\k\1\w\v\6\9\1\4\a\i\j\o\5\a\0\d\7\i\f\3\x\l\l\p\t\t\x\c\6\b\m\9\x\n\a\d\u\1\4\2\o\1\3\a\4\8\t\9\s\e\o\w\r\4\b\o\v\n\z\0\3\f\9\f\f\g\v\g\v\t\j\8\9\k\s\3\7\9\1\v\2\v\r\4\f\a\f\f\i\j\j\v\w\3\8\c\7\n\r\2\2\y\x\v\l\d\4\7\2\f\7\4\r\t\s\g\m\m\x\d\y\a\9\9\m\t\e\a\0\f\i\z\y\8\h\a\3\r\d\t\s\y\g\e\y\1\q\m\3\r\p\w\n\4\i\v\f\t\h\e\w\x\2\3\d\o\w\6\2\g\g\r\g\u\x\6\d\q\e\i\d\l\2\9\f\w\w\d\3\f\4\y\x\4\3\k\2\a\d\m\s\t\2\s\h\p\j\6\y\y\d\v\y\8\w\s\n\e\m\a\e\y\9\h\j\b\b\b\q\5\j\9\8\u\l\g\f\v\6\s\9\b\1\c\z\l\a\t\t\t\u\j\5\c\q\1\m\s\j\o\0\9\9\z\9\a\f\2\i\2\v\4\w\7\y\o\2\q\e\j\w\e\0\6\u\l\q\r\l\5\o\4\f\t\r\f\e\c\u\4\3\r\0\m\u\e\o\j\y\z\e\b\4\z\0\l\a\l\u\9\i\k\k\j\b\k\r\h\1\q\z\k\0\1\b\l\b\i\m\a\c\f\2\u\8\0\c\i\l\z\b\k\0\o\f\m\o\i\b\0\k\a\o\2\f\a\6\1\j\8\6\c\1\4\m\c\t\1\z\d\8\x\m\p\g\o\k\u\8\u\y\i\k\i\g\k\y\2\e\x\0\e\0\9\a\q\1\e\g\x\i\y\z\c\9\p\t\9\7\u\q\c\n\k\7\7\4\u\j\v\6\l\p\c\m\q\b\k\l\f\9\t\y\p\8\e\j\t\4\j\p\q\r\k\p\7\d\8\s\q\1\e\g\2\b\n\7\7\v\7\1\8\z\4\e\0\n\w\c\m\6\u\6\z\o\y\p\1\z\p\9\l\s\g\k\k\k\6\4\z\3\9\5\z\l\3\h\3\2\6\1\1\z\5\y\l\2\9\n\e\i\p\t\m\n\5\4\n\r\f\v\3\e\0\9\f\a\w\r\3\t\3\d\0\7\y\z\q\q\0\4\9\o\9\g\x\p\t\w\z\x\5\z\2\e\v\g\h\u\5\s\f\2\o\1\d\a\y\v\u\f\m\z\o\o\a\y\e\w\u\o\m\1\b\w\t\d\s\l\k\d\r\6\p\3\w\s\z\q\7\e\7\5\p\5\r\o\y\3\j\u\5\w\d\c\3\7\k\s\g\f\h\7\w\c\2\6\0\4\o\6\m\a\o\0\3\h\b\k\0\m\6\o\8\2\j\d\4\3\g\a\i\h\u\v\z\o\5\c\x\o\z\c\g\l\i\j\l\r\n\y\c\j\4\8\0\k\p\c\c\y\1\g\b\m\6\o\8\y\2\m\9\q\n\t\q\s\p\a\s\a\q\a\7\8\4\m\2\3\s\c\k\f\q\t\e\v\m\s\o\y\i\7\y\f\7\d\o\8\4\s\s\5\v\x\o\1\z\g\8\f\6\k\4\q\v\j\c\o\t\o\g\k\t\g\c\8\f\i\e\8\a\d\v\n\z\8\j\l\2\w\8\n\y\1\w\o\5\d\1\k\1\q\z\v\j\y\h\p\y\y\o\y\4\g\8\4\3\3\j\6\w\f\r\3\8\c\f\r\w\7\2\f\h\v\n\4\5\3\b\9\r\u\k\u\i\t\6\o\g\x\f\2\f\h\6\d\x\b\6\l\8\1\p\g\z\q\e\j\s\l\q\z\9\a\1\1\m\6\r\j\m\h\q\i\9\n\2\o\q\o\w\7\i\g\4\s\a\j\r\b\c\3\m\p\3\r\7\r\c\e\v\v\2\a\5\3\s\i\m\z\p\2\8\d\y\3\n\l\4\q\d\z\f\m\8\u\1\n\k\c\m\p\j\h\h\n\5\k\2\9\f\w\4\p\9\d\1\6\7\b\b\c\e\c\q\6\l\q\x\z\7\y\6\r\x\1\q\9\v\7\n\j\x\9\0\r\3\3\i\f\a\s\x\j\p\3\h\q\g\l\q\r\h\p\f\p\r\j\n\v\h\d\n\l\d\h\m\b\5\f\o\l\o\o\x\w\u\y\s\g\u\h\f\c\b\u\u\t\l\w\m\5\6\k\9\l\4\z\u\m\8\n\n\i\w\0\t\o\d\1\x\7\4\3\4\y\z\y\9\v\p\1\s\e\2\1\l\s\7\0\8\y\5\6\s\b\5\k\w\g\9\4\h\x\p\c\y\8\a\q\a\h\4\c\g\p\f\v\9\u\p\k\q\i\r\z\s\f\7\t\8\r\9\q\8\v\v\l\8\t\l\g\b\0\3\q\z\f\s\5\s\d\t\e\p\m\a\1\g\r\6\8\i\v\b\8\r\l\8\c\2\i\t\7\6\o\t\1\9\r\b\0\l\x\9\x\a\a\n\r\t\l\o\4\j\2\w\t\0\o\u\t\b\e\c\5\j\5\k\s\m\z\p\1\c\9\v\r\y\2\t\d\c\2\x\t\a\q\y\r\p\y\j\1\e\k\c\k\s\u\z\b\3\q\e\r\2\i\5\j\4\8\s\y\z\m\p\b\4\4\v\y\4\0\0\o\c\w\6\x\k\x\u\b\r\w\l\8\n\7\d\g\g\9\e\g\u\t\r\a\u\u\e\1\3\b\k\1\o\x\3\5\s\4\r\c\5\1\z\2\m\v\7\m\r\s\h\w\y\n\6\i\a\g\z\x\w\f\e\5\d\4\j\t\w\p\r\8\9\b\p\8\o\u\a\h\n\x\0\y\0\u\l\3\i\i\u\k\b\f\l\y\r\a\n\l\8\j\q\n\b\m\x\s\r\l\1\u\f\9\1\j\q\4\6\g\k\z\h\8\w\m\g\q\9\o\o\p\a\z\v\d\i\g\0\x\2\2\2\w\t\3\x\j\a\i\h\3\c\l\3\z\9\j\1\q\q\a\n\1\m\m\a\l\e\y\r\s\q\r\p\2\n\m\6\6\z\8\0\1\s\t\m\v\b\g\6\s\a\j\p\k\w\2\k\h\l\j\e\t\1\3\a\g\c\c\3\9\1\0\n\o\7\f\u\0\6\m\9\u\x\7\w\5\k\z\j\7\4\n\w\s\q\y\z\9\b\j\t\m\h\6\k\s\g\h\c\i\2\2\r\x\y\v\c\g\y\b\b\6\7\v\i\w\u\6\h\c\a\j\c\0\p\m\s\3\t\2\e\b\2\2\c\7\l\c\i\j\d\p\r\8\r\2\6\f\4\7\m\x\2\j\h\h\e\o\v\6\4\y\l\i\j\n\5\f\8\n\h\8\q\6\5\8\x\k\x\3\r\8\8\p\7\6\a\x\x\i\2\d\q\7\5\0\y\h\1\j\s\z\3\x\9\h\d\m\6\w\k\o\w\6\c\f\r\2\m\g\k\2\8\b\n\q\a\9\t\7\5\p\a\a\m\h\m\5\l\6\3\c\e\y\a\j\b\u\j\j\3\a\s\s\g\i\h\j\p\w\o\8\d\x\z\y\w\z\m\q\m\p\8\z\e\t\w\d\8\n\t\g\j\h\a\u\j\7\4\y\6\q\5\8\5\c\c\6\l\y\0\r\u\a\i\s\u\r\e\u\x\w\h\r\3\u\d\k\u\r\i\a\x\y\n\j\o\d\s\9\p\k\g\e\h\3\5\n\e\9
\y\9\1\1\a\p\9\h\s\x\c\c\j\q\6\5\2\y\q\i\h\k\x\w\j\v\i\f\v\n\z\b\m\x\8\s\3\p\g\g\b\0\5\c\p\y\6\o\q\b\2\5\k\c\1\f\3\b\i\f\7\2\h\m\v\m\t\9\a\4\0\s\u\9\n\i\7\p\p\u\1\f\n\d\t\z\a\t\9\m\f\h\s\p\0\e\x\k\y\8\z\l\r\z\0\9\o\9\b\h\f\n\e\l\j\h\2\l\y\g\1\b\z\y\x\8\1\q\w\0\w\9\g\0\s\w\2\3\t\3\o\j\a\e\y\4\o\v\2\s\n\3\3\5\h\r\n\0\p\1\s\k\0\8\c\i\5\5\4\1\5\k\9\q\w\l\e\u\h\4\s\k\1\d\0\l\k\9\a\8\j\4\p\b\a\o\e\8\1\f\9\t\k\b\e\4\h\3\y\t\w\j\g\v\k\5\0\x\t\b\t\d\9\g\u\r\h\b\o\d\n\m\z\o\u\f\9\0\6\4\9\j\l\u\8\q\6\g\p\w\y\i\h\9\a\6\o\d\3\4\q\g\r\v\w\1\c\n\p\1\x\f\k\o\r\s\2\h\6\y\3\v\7\j\s\m\n\8\e\q\v\8\7\1\8\u\n\u\m\j\o\d\m\w\2\e\w\o\v\f\2\5\c\1\8\l\a\8\b\m\9\c\n\z\l\l\d\r\i\w\r\8\v\x\m\r\o\0\9\i\9\g\6\c\a\0\i\2\5\6\p\f\m\h\z\n\0\n\7\h\3\u\y\l\d\y\m\c\w\g\0\h\k\2\5\u\s\j\u\d\j\6\4\l\s\3\e\8\m\4\9\1\c\q\b\f\d\d\8\x\p\0\6\u\y\g\g\1\v\x\m\w\5\1\5\2\4\7\9\z\2\l\s\y\8\p\q\q\r\t\e\1\3\d\l\i\u\2\p\n\z\c\q\n\p\7\r\v\8\h\l\n\j\x\a\c\t\r\w\b\u\f\n\k\w\6\b\c\i\h\5\g\9\q\0\0\t\q\5\f\e\0\4\7\s\x\g\w\t\r\t\3\y\q\k\x\u\h\f\y\r\4\4\4\o\a\7\8\n\o\4\i\m\f\n\a\8\n\0\r\c\g\n\f\9\w\p\t\g\i\k\d\r\5\6\3\8\t\d\4\f ]] 00:05:59.893 00:05:59.893 real 0m1.727s 00:05:59.893 user 0m1.203s 00:05:59.893 sys 0m0.813s 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:59.893 ************************************ 00:05:59.893 END TEST dd_rw_offset 00:05:59.893 ************************************ 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.893 10:44:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.893 { 00:05:59.893 "subsystems": [ 00:05:59.893 { 00:05:59.893 "subsystem": "bdev", 00:05:59.893 "config": [ 00:05:59.893 { 00:05:59.893 "params": { 00:05:59.893 "trtype": "pcie", 00:05:59.893 "traddr": "0000:00:10.0", 00:05:59.893 "name": "Nvme0" 00:05:59.893 }, 00:05:59.893 "method": "bdev_nvme_attach_controller" 00:05:59.893 }, 00:05:59.893 { 00:05:59.893 "method": "bdev_wait_for_examine" 00:05:59.893 } 00:05:59.893 ] 00:05:59.893 } 00:05:59.893 ] 00:05:59.893 } 00:05:59.893 [2024-07-25 10:44:29.456112] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:59.893 [2024-07-25 10:44:29.456388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62083 ] 00:05:59.893 [2024-07-25 10:44:29.594002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.153 [2024-07-25 10:44:29.747443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.153 [2024-07-25 10:44:29.828951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.671  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:00.671 00:06:00.671 10:44:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.671 ************************************ 00:06:00.671 END TEST spdk_dd_basic_rw 00:06:00.671 ************************************ 00:06:00.671 00:06:00.671 real 0m22.238s 00:06:00.671 user 0m15.997s 00:06:00.671 sys 0m8.852s 00:06:00.671 10:44:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.671 10:44:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.671 10:44:30 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:00.671 10:44:30 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.671 10:44:30 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.671 10:44:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:00.671 ************************************ 00:06:00.671 START TEST spdk_dd_posix 00:06:00.671 ************************************ 00:06:00.671 10:44:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:00.931 * Looking for test storage... 
00:06:00.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:00.931 * First test run, liburing in use 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:00.931 ************************************ 00:06:00.931 START TEST dd_flag_append 00:06:00.931 ************************************ 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=ulp3dwg1o4i900fd6nevauf3yiwxyw48 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=w94nb9urmga54n5wqcfqy6i91eezm8rr 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s ulp3dwg1o4i900fd6nevauf3yiwxyw48 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s w94nb9urmga54n5wqcfqy6i91eezm8rr 00:06:00.931 10:44:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:00.931 [2024-07-25 10:44:30.554661] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:00.931 [2024-07-25 10:44:30.554801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62147 ] 00:06:01.190 [2024-07-25 10:44:30.693248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.190 [2024-07-25 10:44:30.845710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.190 [2024-07-25 10:44:30.924005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.707  Copying: 32/32 [B] (average 31 kBps) 00:06:01.707 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ w94nb9urmga54n5wqcfqy6i91eezm8rrulp3dwg1o4i900fd6nevauf3yiwxyw48 == \w\9\4\n\b\9\u\r\m\g\a\5\4\n\5\w\q\c\f\q\y\6\i\9\1\e\e\z\m\8\r\r\u\l\p\3\d\w\g\1\o\4\i\9\0\0\f\d\6\n\e\v\a\u\f\3\y\i\w\x\y\w\4\8 ]] 00:06:01.707 00:06:01.707 real 0m0.824s 00:06:01.707 user 0m0.523s 00:06:01.707 sys 0m0.379s 00:06:01.707 ************************************ 00:06:01.707 END TEST dd_flag_append 00:06:01.707 ************************************ 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:01.707 ************************************ 00:06:01.707 START TEST dd_flag_directory 00:06:01.707 ************************************ 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:01.707 10:44:31 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.707 [2024-07-25 10:44:31.426533] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:01.707 [2024-07-25 10:44:31.426691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62181 ] 00:06:01.966 [2024-07-25 10:44:31.564861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.966 [2024-07-25 10:44:31.694037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.224 [2024-07-25 10:44:31.772651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.224 [2024-07-25 10:44:31.817196] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.224 [2024-07-25 10:44:31.817247] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.224 [2024-07-25 10:44:31.817275] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.483 [2024-07-25 10:44:31.987708] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:02.483 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:02.483 [2024-07-25 10:44:32.184351] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:02.483 [2024-07-25 10:44:32.184485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62196 ] 00:06:02.741 [2024-07-25 10:44:32.320732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.741 [2024-07-25 10:44:32.448146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.000 [2024-07-25 10:44:32.525632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.000 [2024-07-25 10:44:32.572183] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:03.000 [2024-07-25 10:44:32.572242] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:03.000 [2024-07-25 10:44:32.572272] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.259 [2024-07-25 10:44:32.738783] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.259 00:06:03.259 real 0m1.501s 00:06:03.259 user 0m0.901s 00:06:03.259 sys 0m0.387s 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:03.259 ************************************ 00:06:03.259 END TEST dd_flag_directory 00:06:03.259 ************************************ 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test 
dd_flag_nofollow nofollow 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:03.259 ************************************ 00:06:03.259 START TEST dd_flag_nofollow 00:06:03.259 ************************************ 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:03.259 10:44:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.259 [2024-07-25 10:44:32.993638] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:03.259 [2024-07-25 10:44:32.993781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62229 ] 00:06:03.518 [2024-07-25 10:44:33.135171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.777 [2024-07-25 10:44:33.268500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.777 [2024-07-25 10:44:33.344417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.777 [2024-07-25 10:44:33.389571] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:03.777 [2024-07-25 10:44:33.389640] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:03.777 [2024-07-25 10:44:33.389655] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.035 [2024-07-25 10:44:33.554297] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.035 10:44:33 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:04.035 10:44:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:04.035 [2024-07-25 10:44:33.751975] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:04.035 [2024-07-25 10:44:33.752139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62240 ] 00:06:04.293 [2024-07-25 10:44:33.889669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.552 [2024-07-25 10:44:34.049606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.552 [2024-07-25 10:44:34.134178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.552 [2024-07-25 10:44:34.186402] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:04.552 [2024-07-25 10:44:34.186462] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:04.552 [2024-07-25 10:44:34.186479] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.810 [2024-07-25 10:44:34.362989] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:04.810 10:44:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:04.810 10:44:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.810 10:44:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:04.810 10:44:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:04.810 10:44:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:04.810 10:44:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.810 10:44:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:04.810 10:44:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:04.810 10:44:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:04.810 10:44:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.069 [2024-07-25 10:44:34.594676] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:05.069 [2024-07-25 10:44:34.594811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62249 ] 00:06:05.069 [2024-07-25 10:44:34.733146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.327 [2024-07-25 10:44:34.886386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.327 [2024-07-25 10:44:34.965222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.587  Copying: 512/512 [B] (average 500 kBps) 00:06:05.587 00:06:05.587 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ lpakqjwxd6uwfkrtsdqn7r4ynb4xa9wc542dngcn7s7idlhc8li8dkr6glf4j1h8oengtlsf1c4ltya0zxq332opw8co9jsa7bfi07ma1sknbsqxqglis0trcmxn7jtnmye5b06l7lrty8msa0l7z17rer14aaoiw7w2k41hgcl5hwf1z0f92ombrep9k9h8gaj5smqesiqh000ywemwou701dhj01mcu0y4xgy3qy96q0j2a2om0q5bcsibkbho5p9pwmy88g8nqy4n6zkuljiqfjqungvypt7zol0skuu9kvdkomcudl96v973b5dbb63t57y8r8qeleod2id4b2wec3ynrhos6v97rlkl7bxb04tidv9f79o2a1yoe1j9swkqcoeh3fwtwiromsi6b06163ros55edad2yv6mydr09d2jalsbd1bnjqzk6wy85hqbpj0sc2gkzgibjicbs3olan1325sg30qwzqgjv1ef8bgqqbguge8udiuqw9qe == \l\p\a\k\q\j\w\x\d\6\u\w\f\k\r\t\s\d\q\n\7\r\4\y\n\b\4\x\a\9\w\c\5\4\2\d\n\g\c\n\7\s\7\i\d\l\h\c\8\l\i\8\d\k\r\6\g\l\f\4\j\1\h\8\o\e\n\g\t\l\s\f\1\c\4\l\t\y\a\0\z\x\q\3\3\2\o\p\w\8\c\o\9\j\s\a\7\b\f\i\0\7\m\a\1\s\k\n\b\s\q\x\q\g\l\i\s\0\t\r\c\m\x\n\7\j\t\n\m\y\e\5\b\0\6\l\7\l\r\t\y\8\m\s\a\0\l\7\z\1\7\r\e\r\1\4\a\a\o\i\w\7\w\2\k\4\1\h\g\c\l\5\h\w\f\1\z\0\f\9\2\o\m\b\r\e\p\9\k\9\h\8\g\a\j\5\s\m\q\e\s\i\q\h\0\0\0\y\w\e\m\w\o\u\7\0\1\d\h\j\0\1\m\c\u\0\y\4\x\g\y\3\q\y\9\6\q\0\j\2\a\2\o\m\0\q\5\b\c\s\i\b\k\b\h\o\5\p\9\p\w\m\y\8\8\g\8\n\q\y\4\n\6\z\k\u\l\j\i\q\f\j\q\u\n\g\v\y\p\t\7\z\o\l\0\s\k\u\u\9\k\v\d\k\o\m\c\u\d\l\9\6\v\9\7\3\b\5\d\b\b\6\3\t\5\7\y\8\r\8\q\e\l\e\o\d\2\i\d\4\b\2\w\e\c\3\y\n\r\h\o\s\6\v\9\7\r\l\k\l\7\b\x\b\0\4\t\i\d\v\9\f\7\9\o\2\a\1\y\o\e\1\j\9\s\w\k\q\c\o\e\h\3\f\w\t\w\i\r\o\m\s\i\6\b\0\6\1\6\3\r\o\s\5\5\e\d\a\d\2\y\v\6\m\y\d\r\0\9\d\2\j\a\l\s\b\d\1\b\n\j\q\z\k\6\w\y\8\5\h\q\b\p\j\0\s\c\2\g\k\z\g\i\b\j\i\c\b\s\3\o\l\a\n\1\3\2\5\s\g\3\0\q\w\z\q\g\j\v\1\e\f\8\b\g\q\q\b\g\u\g\e\8\u\d\i\u\q\w\9\q\e ]] 00:06:05.587 00:06:05.587 real 0m2.376s 00:06:05.587 user 0m1.440s 00:06:05.587 sys 0m0.793s 00:06:05.587 ************************************ 00:06:05.587 END TEST dd_flag_nofollow 00:06:05.587 ************************************ 00:06:05.587 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.587 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:05.845 ************************************ 00:06:05.845 START TEST dd_flag_noatime 00:06:05.845 ************************************ 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:05.845 10:44:35 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721904275 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721904275 00:06:05.845 10:44:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:06.782 10:44:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.782 [2024-07-25 10:44:36.438822] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:06.782 [2024-07-25 10:44:36.439006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62297 ] 00:06:07.041 [2024-07-25 10:44:36.577845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.041 [2024-07-25 10:44:36.714973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.300 [2024-07-25 10:44:36.792363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.559  Copying: 512/512 [B] (average 500 kBps) 00:06:07.559 00:06:07.559 10:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:07.559 10:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721904275 )) 00:06:07.559 10:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.559 10:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721904275 )) 00:06:07.559 10:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.559 [2024-07-25 10:44:37.193407] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:07.559 [2024-07-25 10:44:37.193493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62315 ] 00:06:07.817 [2024-07-25 10:44:37.331777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.817 [2024-07-25 10:44:37.451719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.817 [2024-07-25 10:44:37.531295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.334  Copying: 512/512 [B] (average 500 kBps) 00:06:08.334 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721904277 )) 00:06:08.334 00:06:08.334 real 0m2.532s 00:06:08.334 user 0m0.905s 00:06:08.334 sys 0m0.768s 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.334 ************************************ 00:06:08.334 END TEST dd_flag_noatime 00:06:08.334 ************************************ 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:08.334 ************************************ 00:06:08.334 START TEST dd_flags_misc 00:06:08.334 ************************************ 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:08.334 10:44:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:08.334 [2024-07-25 10:44:37.999294] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:08.334 [2024-07-25 10:44:37.999402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62345 ] 00:06:08.593 [2024-07-25 10:44:38.139988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.593 [2024-07-25 10:44:38.271293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.852 [2024-07-25 10:44:38.346852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.111  Copying: 512/512 [B] (average 500 kBps) 00:06:09.111 00:06:09.111 10:44:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ uenmkcg7t0t9gjjo2fd9a7vaprqpllsthux925l3yuodsttzvquxxmc9dbud5boisch243rmdtpb4eym95uhxdciuewcwsuzo0gih7ckrpx5b85biqxgvvf2yjivvhai7pbwq6sdff8vl2mcegman38ev68pbuwkt16rvyse7bj4v7ummomnxlwxlexmxjveayoz40wzbdh0b764bpi9bc00vkd2djhjzvcyr6yk1a1g0e0kq5q86zoaw1mhnd1yd27ajwm0q07fr7xui46wuyl63cz2f30i1cmsjr0gmlklvv0bewem5aa3u7s3rh565y6tmxpw8udxt8cgnwbk2z00bqmvtham9ubinklgkjctkkoyarh11oj9ty9q2dbwh6yklr0jme8qaip4diiypwfka7bbsssv9dyziiwdy47msxlo97srn9xwpu19qp5fel0s4moxf863fbf78awl8prr328e735z8h011d8os4ar6mtwbvvd2f9d3fyxlkjw == \u\e\n\m\k\c\g\7\t\0\t\9\g\j\j\o\2\f\d\9\a\7\v\a\p\r\q\p\l\l\s\t\h\u\x\9\2\5\l\3\y\u\o\d\s\t\t\z\v\q\u\x\x\m\c\9\d\b\u\d\5\b\o\i\s\c\h\2\4\3\r\m\d\t\p\b\4\e\y\m\9\5\u\h\x\d\c\i\u\e\w\c\w\s\u\z\o\0\g\i\h\7\c\k\r\p\x\5\b\8\5\b\i\q\x\g\v\v\f\2\y\j\i\v\v\h\a\i\7\p\b\w\q\6\s\d\f\f\8\v\l\2\m\c\e\g\m\a\n\3\8\e\v\6\8\p\b\u\w\k\t\1\6\r\v\y\s\e\7\b\j\4\v\7\u\m\m\o\m\n\x\l\w\x\l\e\x\m\x\j\v\e\a\y\o\z\4\0\w\z\b\d\h\0\b\7\6\4\b\p\i\9\b\c\0\0\v\k\d\2\d\j\h\j\z\v\c\y\r\6\y\k\1\a\1\g\0\e\0\k\q\5\q\8\6\z\o\a\w\1\m\h\n\d\1\y\d\2\7\a\j\w\m\0\q\0\7\f\r\7\x\u\i\4\6\w\u\y\l\6\3\c\z\2\f\3\0\i\1\c\m\s\j\r\0\g\m\l\k\l\v\v\0\b\e\w\e\m\5\a\a\3\u\7\s\3\r\h\5\6\5\y\6\t\m\x\p\w\8\u\d\x\t\8\c\g\n\w\b\k\2\z\0\0\b\q\m\v\t\h\a\m\9\u\b\i\n\k\l\g\k\j\c\t\k\k\o\y\a\r\h\1\1\o\j\9\t\y\9\q\2\d\b\w\h\6\y\k\l\r\0\j\m\e\8\q\a\i\p\4\d\i\i\y\p\w\f\k\a\7\b\b\s\s\s\v\9\d\y\z\i\i\w\d\y\4\7\m\s\x\l\o\9\7\s\r\n\9\x\w\p\u\1\9\q\p\5\f\e\l\0\s\4\m\o\x\f\8\6\3\f\b\f\7\8\a\w\l\8\p\r\r\3\2\8\e\7\3\5\z\8\h\0\1\1\d\8\o\s\4\a\r\6\m\t\w\b\v\v\d\2\f\9\d\3\f\y\x\l\k\j\w ]] 00:06:09.111 10:44:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:09.111 10:44:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:09.111 [2024-07-25 10:44:38.735414] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:09.111 [2024-07-25 10:44:38.735536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62354 ] 00:06:09.370 [2024-07-25 10:44:38.872648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.370 [2024-07-25 10:44:38.995847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.370 [2024-07-25 10:44:39.071831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.888  Copying: 512/512 [B] (average 500 kBps) 00:06:09.888 00:06:09.888 10:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ uenmkcg7t0t9gjjo2fd9a7vaprqpllsthux925l3yuodsttzvquxxmc9dbud5boisch243rmdtpb4eym95uhxdciuewcwsuzo0gih7ckrpx5b85biqxgvvf2yjivvhai7pbwq6sdff8vl2mcegman38ev68pbuwkt16rvyse7bj4v7ummomnxlwxlexmxjveayoz40wzbdh0b764bpi9bc00vkd2djhjzvcyr6yk1a1g0e0kq5q86zoaw1mhnd1yd27ajwm0q07fr7xui46wuyl63cz2f30i1cmsjr0gmlklvv0bewem5aa3u7s3rh565y6tmxpw8udxt8cgnwbk2z00bqmvtham9ubinklgkjctkkoyarh11oj9ty9q2dbwh6yklr0jme8qaip4diiypwfka7bbsssv9dyziiwdy47msxlo97srn9xwpu19qp5fel0s4moxf863fbf78awl8prr328e735z8h011d8os4ar6mtwbvvd2f9d3fyxlkjw == \u\e\n\m\k\c\g\7\t\0\t\9\g\j\j\o\2\f\d\9\a\7\v\a\p\r\q\p\l\l\s\t\h\u\x\9\2\5\l\3\y\u\o\d\s\t\t\z\v\q\u\x\x\m\c\9\d\b\u\d\5\b\o\i\s\c\h\2\4\3\r\m\d\t\p\b\4\e\y\m\9\5\u\h\x\d\c\i\u\e\w\c\w\s\u\z\o\0\g\i\h\7\c\k\r\p\x\5\b\8\5\b\i\q\x\g\v\v\f\2\y\j\i\v\v\h\a\i\7\p\b\w\q\6\s\d\f\f\8\v\l\2\m\c\e\g\m\a\n\3\8\e\v\6\8\p\b\u\w\k\t\1\6\r\v\y\s\e\7\b\j\4\v\7\u\m\m\o\m\n\x\l\w\x\l\e\x\m\x\j\v\e\a\y\o\z\4\0\w\z\b\d\h\0\b\7\6\4\b\p\i\9\b\c\0\0\v\k\d\2\d\j\h\j\z\v\c\y\r\6\y\k\1\a\1\g\0\e\0\k\q\5\q\8\6\z\o\a\w\1\m\h\n\d\1\y\d\2\7\a\j\w\m\0\q\0\7\f\r\7\x\u\i\4\6\w\u\y\l\6\3\c\z\2\f\3\0\i\1\c\m\s\j\r\0\g\m\l\k\l\v\v\0\b\e\w\e\m\5\a\a\3\u\7\s\3\r\h\5\6\5\y\6\t\m\x\p\w\8\u\d\x\t\8\c\g\n\w\b\k\2\z\0\0\b\q\m\v\t\h\a\m\9\u\b\i\n\k\l\g\k\j\c\t\k\k\o\y\a\r\h\1\1\o\j\9\t\y\9\q\2\d\b\w\h\6\y\k\l\r\0\j\m\e\8\q\a\i\p\4\d\i\i\y\p\w\f\k\a\7\b\b\s\s\s\v\9\d\y\z\i\i\w\d\y\4\7\m\s\x\l\o\9\7\s\r\n\9\x\w\p\u\1\9\q\p\5\f\e\l\0\s\4\m\o\x\f\8\6\3\f\b\f\7\8\a\w\l\8\p\r\r\3\2\8\e\7\3\5\z\8\h\0\1\1\d\8\o\s\4\a\r\6\m\t\w\b\v\v\d\2\f\9\d\3\f\y\x\l\k\j\w ]] 00:06:09.888 10:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:09.888 10:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:09.888 [2024-07-25 10:44:39.504568] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:09.888 [2024-07-25 10:44:39.504680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62369 ] 00:06:10.154 [2024-07-25 10:44:39.643944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.154 [2024-07-25 10:44:39.769796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.154 [2024-07-25 10:44:39.850320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.671  Copying: 512/512 [B] (average 100 kBps) 00:06:10.671 00:06:10.671 10:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ uenmkcg7t0t9gjjo2fd9a7vaprqpllsthux925l3yuodsttzvquxxmc9dbud5boisch243rmdtpb4eym95uhxdciuewcwsuzo0gih7ckrpx5b85biqxgvvf2yjivvhai7pbwq6sdff8vl2mcegman38ev68pbuwkt16rvyse7bj4v7ummomnxlwxlexmxjveayoz40wzbdh0b764bpi9bc00vkd2djhjzvcyr6yk1a1g0e0kq5q86zoaw1mhnd1yd27ajwm0q07fr7xui46wuyl63cz2f30i1cmsjr0gmlklvv0bewem5aa3u7s3rh565y6tmxpw8udxt8cgnwbk2z00bqmvtham9ubinklgkjctkkoyarh11oj9ty9q2dbwh6yklr0jme8qaip4diiypwfka7bbsssv9dyziiwdy47msxlo97srn9xwpu19qp5fel0s4moxf863fbf78awl8prr328e735z8h011d8os4ar6mtwbvvd2f9d3fyxlkjw == \u\e\n\m\k\c\g\7\t\0\t\9\g\j\j\o\2\f\d\9\a\7\v\a\p\r\q\p\l\l\s\t\h\u\x\9\2\5\l\3\y\u\o\d\s\t\t\z\v\q\u\x\x\m\c\9\d\b\u\d\5\b\o\i\s\c\h\2\4\3\r\m\d\t\p\b\4\e\y\m\9\5\u\h\x\d\c\i\u\e\w\c\w\s\u\z\o\0\g\i\h\7\c\k\r\p\x\5\b\8\5\b\i\q\x\g\v\v\f\2\y\j\i\v\v\h\a\i\7\p\b\w\q\6\s\d\f\f\8\v\l\2\m\c\e\g\m\a\n\3\8\e\v\6\8\p\b\u\w\k\t\1\6\r\v\y\s\e\7\b\j\4\v\7\u\m\m\o\m\n\x\l\w\x\l\e\x\m\x\j\v\e\a\y\o\z\4\0\w\z\b\d\h\0\b\7\6\4\b\p\i\9\b\c\0\0\v\k\d\2\d\j\h\j\z\v\c\y\r\6\y\k\1\a\1\g\0\e\0\k\q\5\q\8\6\z\o\a\w\1\m\h\n\d\1\y\d\2\7\a\j\w\m\0\q\0\7\f\r\7\x\u\i\4\6\w\u\y\l\6\3\c\z\2\f\3\0\i\1\c\m\s\j\r\0\g\m\l\k\l\v\v\0\b\e\w\e\m\5\a\a\3\u\7\s\3\r\h\5\6\5\y\6\t\m\x\p\w\8\u\d\x\t\8\c\g\n\w\b\k\2\z\0\0\b\q\m\v\t\h\a\m\9\u\b\i\n\k\l\g\k\j\c\t\k\k\o\y\a\r\h\1\1\o\j\9\t\y\9\q\2\d\b\w\h\6\y\k\l\r\0\j\m\e\8\q\a\i\p\4\d\i\i\y\p\w\f\k\a\7\b\b\s\s\s\v\9\d\y\z\i\i\w\d\y\4\7\m\s\x\l\o\9\7\s\r\n\9\x\w\p\u\1\9\q\p\5\f\e\l\0\s\4\m\o\x\f\8\6\3\f\b\f\7\8\a\w\l\8\p\r\r\3\2\8\e\7\3\5\z\8\h\0\1\1\d\8\o\s\4\a\r\6\m\t\w\b\v\v\d\2\f\9\d\3\f\y\x\l\k\j\w ]] 00:06:10.671 10:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:10.671 10:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:10.671 [2024-07-25 10:44:40.240754] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:10.671 [2024-07-25 10:44:40.240889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62379 ] 00:06:10.671 [2024-07-25 10:44:40.374442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.929 [2024-07-25 10:44:40.484352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.929 [2024-07-25 10:44:40.559931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.187  Copying: 512/512 [B] (average 250 kBps) 00:06:11.187 00:06:11.187 10:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ uenmkcg7t0t9gjjo2fd9a7vaprqpllsthux925l3yuodsttzvquxxmc9dbud5boisch243rmdtpb4eym95uhxdciuewcwsuzo0gih7ckrpx5b85biqxgvvf2yjivvhai7pbwq6sdff8vl2mcegman38ev68pbuwkt16rvyse7bj4v7ummomnxlwxlexmxjveayoz40wzbdh0b764bpi9bc00vkd2djhjzvcyr6yk1a1g0e0kq5q86zoaw1mhnd1yd27ajwm0q07fr7xui46wuyl63cz2f30i1cmsjr0gmlklvv0bewem5aa3u7s3rh565y6tmxpw8udxt8cgnwbk2z00bqmvtham9ubinklgkjctkkoyarh11oj9ty9q2dbwh6yklr0jme8qaip4diiypwfka7bbsssv9dyziiwdy47msxlo97srn9xwpu19qp5fel0s4moxf863fbf78awl8prr328e735z8h011d8os4ar6mtwbvvd2f9d3fyxlkjw == \u\e\n\m\k\c\g\7\t\0\t\9\g\j\j\o\2\f\d\9\a\7\v\a\p\r\q\p\l\l\s\t\h\u\x\9\2\5\l\3\y\u\o\d\s\t\t\z\v\q\u\x\x\m\c\9\d\b\u\d\5\b\o\i\s\c\h\2\4\3\r\m\d\t\p\b\4\e\y\m\9\5\u\h\x\d\c\i\u\e\w\c\w\s\u\z\o\0\g\i\h\7\c\k\r\p\x\5\b\8\5\b\i\q\x\g\v\v\f\2\y\j\i\v\v\h\a\i\7\p\b\w\q\6\s\d\f\f\8\v\l\2\m\c\e\g\m\a\n\3\8\e\v\6\8\p\b\u\w\k\t\1\6\r\v\y\s\e\7\b\j\4\v\7\u\m\m\o\m\n\x\l\w\x\l\e\x\m\x\j\v\e\a\y\o\z\4\0\w\z\b\d\h\0\b\7\6\4\b\p\i\9\b\c\0\0\v\k\d\2\d\j\h\j\z\v\c\y\r\6\y\k\1\a\1\g\0\e\0\k\q\5\q\8\6\z\o\a\w\1\m\h\n\d\1\y\d\2\7\a\j\w\m\0\q\0\7\f\r\7\x\u\i\4\6\w\u\y\l\6\3\c\z\2\f\3\0\i\1\c\m\s\j\r\0\g\m\l\k\l\v\v\0\b\e\w\e\m\5\a\a\3\u\7\s\3\r\h\5\6\5\y\6\t\m\x\p\w\8\u\d\x\t\8\c\g\n\w\b\k\2\z\0\0\b\q\m\v\t\h\a\m\9\u\b\i\n\k\l\g\k\j\c\t\k\k\o\y\a\r\h\1\1\o\j\9\t\y\9\q\2\d\b\w\h\6\y\k\l\r\0\j\m\e\8\q\a\i\p\4\d\i\i\y\p\w\f\k\a\7\b\b\s\s\s\v\9\d\y\z\i\i\w\d\y\4\7\m\s\x\l\o\9\7\s\r\n\9\x\w\p\u\1\9\q\p\5\f\e\l\0\s\4\m\o\x\f\8\6\3\f\b\f\7\8\a\w\l\8\p\r\r\3\2\8\e\7\3\5\z\8\h\0\1\1\d\8\o\s\4\a\r\6\m\t\w\b\v\v\d\2\f\9\d\3\f\y\x\l\k\j\w ]] 00:06:11.187 10:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:11.187 10:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:11.187 10:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:11.187 10:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:11.187 10:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.187 10:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:11.446 [2024-07-25 10:44:40.958937] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:11.446 [2024-07-25 10:44:40.959094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62394 ] 00:06:11.446 [2024-07-25 10:44:41.100619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.704 [2024-07-25 10:44:41.248970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.704 [2024-07-25 10:44:41.322151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.961  Copying: 512/512 [B] (average 500 kBps) 00:06:11.961 00:06:11.962 10:44:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wo2yqmx36y0vg4h4m30alg8bh1ahdzisjvv8946cz0eknwnh9ksbgxnbvybt9yhutimfzrvi05ezzchg9rv012qpiblggdxkdwr81y4hfnklfylcqzpyrgjpz4yon82d6xfgw7ftqwarnnrya258kfs4u5sry9gqvvdnqat0l3f7172db6nap6oi2cd4ixt5pb54pslexu0t2ee574ccnkpesibzzr8d9km3bqtyg5v4xzgi7gh8g6gbj5vzyv8y85zw4ex6pv729qzjv3juusjpuhi0xbpng58rq72gqp0if62klea9iadp4nu2sqh0859bp44sekeqwpp28dp6a3ezhavmehubgkwrtbwcuhgq4p5a7mhu0nzgiukcf1q94s3ll8917vu0xwg75559jd0c7cbnqcd4lvhh99ctzf5h0e9wha9v917zbluhrsy73oinftqur5ic8lbftus34oqfv2720ycrzqldwsem99t3lgor8kxoypls8v52t0ku == \w\o\2\y\q\m\x\3\6\y\0\v\g\4\h\4\m\3\0\a\l\g\8\b\h\1\a\h\d\z\i\s\j\v\v\8\9\4\6\c\z\0\e\k\n\w\n\h\9\k\s\b\g\x\n\b\v\y\b\t\9\y\h\u\t\i\m\f\z\r\v\i\0\5\e\z\z\c\h\g\9\r\v\0\1\2\q\p\i\b\l\g\g\d\x\k\d\w\r\8\1\y\4\h\f\n\k\l\f\y\l\c\q\z\p\y\r\g\j\p\z\4\y\o\n\8\2\d\6\x\f\g\w\7\f\t\q\w\a\r\n\n\r\y\a\2\5\8\k\f\s\4\u\5\s\r\y\9\g\q\v\v\d\n\q\a\t\0\l\3\f\7\1\7\2\d\b\6\n\a\p\6\o\i\2\c\d\4\i\x\t\5\p\b\5\4\p\s\l\e\x\u\0\t\2\e\e\5\7\4\c\c\n\k\p\e\s\i\b\z\z\r\8\d\9\k\m\3\b\q\t\y\g\5\v\4\x\z\g\i\7\g\h\8\g\6\g\b\j\5\v\z\y\v\8\y\8\5\z\w\4\e\x\6\p\v\7\2\9\q\z\j\v\3\j\u\u\s\j\p\u\h\i\0\x\b\p\n\g\5\8\r\q\7\2\g\q\p\0\i\f\6\2\k\l\e\a\9\i\a\d\p\4\n\u\2\s\q\h\0\8\5\9\b\p\4\4\s\e\k\e\q\w\p\p\2\8\d\p\6\a\3\e\z\h\a\v\m\e\h\u\b\g\k\w\r\t\b\w\c\u\h\g\q\4\p\5\a\7\m\h\u\0\n\z\g\i\u\k\c\f\1\q\9\4\s\3\l\l\8\9\1\7\v\u\0\x\w\g\7\5\5\5\9\j\d\0\c\7\c\b\n\q\c\d\4\l\v\h\h\9\9\c\t\z\f\5\h\0\e\9\w\h\a\9\v\9\1\7\z\b\l\u\h\r\s\y\7\3\o\i\n\f\t\q\u\r\5\i\c\8\l\b\f\t\u\s\3\4\o\q\f\v\2\7\2\0\y\c\r\z\q\l\d\w\s\e\m\9\9\t\3\l\g\o\r\8\k\x\o\y\p\l\s\8\v\5\2\t\0\k\u ]] 00:06:11.962 10:44:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.962 10:44:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:12.220 [2024-07-25 10:44:41.713779] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:12.220 [2024-07-25 10:44:41.713899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62403 ] 00:06:12.220 [2024-07-25 10:44:41.844195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.479 [2024-07-25 10:44:41.990818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.479 [2024-07-25 10:44:42.062965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.738  Copying: 512/512 [B] (average 500 kBps) 00:06:12.738 00:06:12.738 10:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wo2yqmx36y0vg4h4m30alg8bh1ahdzisjvv8946cz0eknwnh9ksbgxnbvybt9yhutimfzrvi05ezzchg9rv012qpiblggdxkdwr81y4hfnklfylcqzpyrgjpz4yon82d6xfgw7ftqwarnnrya258kfs4u5sry9gqvvdnqat0l3f7172db6nap6oi2cd4ixt5pb54pslexu0t2ee574ccnkpesibzzr8d9km3bqtyg5v4xzgi7gh8g6gbj5vzyv8y85zw4ex6pv729qzjv3juusjpuhi0xbpng58rq72gqp0if62klea9iadp4nu2sqh0859bp44sekeqwpp28dp6a3ezhavmehubgkwrtbwcuhgq4p5a7mhu0nzgiukcf1q94s3ll8917vu0xwg75559jd0c7cbnqcd4lvhh99ctzf5h0e9wha9v917zbluhrsy73oinftqur5ic8lbftus34oqfv2720ycrzqldwsem99t3lgor8kxoypls8v52t0ku == \w\o\2\y\q\m\x\3\6\y\0\v\g\4\h\4\m\3\0\a\l\g\8\b\h\1\a\h\d\z\i\s\j\v\v\8\9\4\6\c\z\0\e\k\n\w\n\h\9\k\s\b\g\x\n\b\v\y\b\t\9\y\h\u\t\i\m\f\z\r\v\i\0\5\e\z\z\c\h\g\9\r\v\0\1\2\q\p\i\b\l\g\g\d\x\k\d\w\r\8\1\y\4\h\f\n\k\l\f\y\l\c\q\z\p\y\r\g\j\p\z\4\y\o\n\8\2\d\6\x\f\g\w\7\f\t\q\w\a\r\n\n\r\y\a\2\5\8\k\f\s\4\u\5\s\r\y\9\g\q\v\v\d\n\q\a\t\0\l\3\f\7\1\7\2\d\b\6\n\a\p\6\o\i\2\c\d\4\i\x\t\5\p\b\5\4\p\s\l\e\x\u\0\t\2\e\e\5\7\4\c\c\n\k\p\e\s\i\b\z\z\r\8\d\9\k\m\3\b\q\t\y\g\5\v\4\x\z\g\i\7\g\h\8\g\6\g\b\j\5\v\z\y\v\8\y\8\5\z\w\4\e\x\6\p\v\7\2\9\q\z\j\v\3\j\u\u\s\j\p\u\h\i\0\x\b\p\n\g\5\8\r\q\7\2\g\q\p\0\i\f\6\2\k\l\e\a\9\i\a\d\p\4\n\u\2\s\q\h\0\8\5\9\b\p\4\4\s\e\k\e\q\w\p\p\2\8\d\p\6\a\3\e\z\h\a\v\m\e\h\u\b\g\k\w\r\t\b\w\c\u\h\g\q\4\p\5\a\7\m\h\u\0\n\z\g\i\u\k\c\f\1\q\9\4\s\3\l\l\8\9\1\7\v\u\0\x\w\g\7\5\5\5\9\j\d\0\c\7\c\b\n\q\c\d\4\l\v\h\h\9\9\c\t\z\f\5\h\0\e\9\w\h\a\9\v\9\1\7\z\b\l\u\h\r\s\y\7\3\o\i\n\f\t\q\u\r\5\i\c\8\l\b\f\t\u\s\3\4\o\q\f\v\2\7\2\0\y\c\r\z\q\l\d\w\s\e\m\9\9\t\3\l\g\o\r\8\k\x\o\y\p\l\s\8\v\5\2\t\0\k\u ]] 00:06:12.738 10:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.738 10:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:12.738 [2024-07-25 10:44:42.444376] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:12.738 [2024-07-25 10:44:42.444466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62413 ] 00:06:12.997 [2024-07-25 10:44:42.578872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.997 [2024-07-25 10:44:42.712649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.255 [2024-07-25 10:44:42.788836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.513  Copying: 512/512 [B] (average 125 kBps) 00:06:13.513 00:06:13.514 10:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wo2yqmx36y0vg4h4m30alg8bh1ahdzisjvv8946cz0eknwnh9ksbgxnbvybt9yhutimfzrvi05ezzchg9rv012qpiblggdxkdwr81y4hfnklfylcqzpyrgjpz4yon82d6xfgw7ftqwarnnrya258kfs4u5sry9gqvvdnqat0l3f7172db6nap6oi2cd4ixt5pb54pslexu0t2ee574ccnkpesibzzr8d9km3bqtyg5v4xzgi7gh8g6gbj5vzyv8y85zw4ex6pv729qzjv3juusjpuhi0xbpng58rq72gqp0if62klea9iadp4nu2sqh0859bp44sekeqwpp28dp6a3ezhavmehubgkwrtbwcuhgq4p5a7mhu0nzgiukcf1q94s3ll8917vu0xwg75559jd0c7cbnqcd4lvhh99ctzf5h0e9wha9v917zbluhrsy73oinftqur5ic8lbftus34oqfv2720ycrzqldwsem99t3lgor8kxoypls8v52t0ku == \w\o\2\y\q\m\x\3\6\y\0\v\g\4\h\4\m\3\0\a\l\g\8\b\h\1\a\h\d\z\i\s\j\v\v\8\9\4\6\c\z\0\e\k\n\w\n\h\9\k\s\b\g\x\n\b\v\y\b\t\9\y\h\u\t\i\m\f\z\r\v\i\0\5\e\z\z\c\h\g\9\r\v\0\1\2\q\p\i\b\l\g\g\d\x\k\d\w\r\8\1\y\4\h\f\n\k\l\f\y\l\c\q\z\p\y\r\g\j\p\z\4\y\o\n\8\2\d\6\x\f\g\w\7\f\t\q\w\a\r\n\n\r\y\a\2\5\8\k\f\s\4\u\5\s\r\y\9\g\q\v\v\d\n\q\a\t\0\l\3\f\7\1\7\2\d\b\6\n\a\p\6\o\i\2\c\d\4\i\x\t\5\p\b\5\4\p\s\l\e\x\u\0\t\2\e\e\5\7\4\c\c\n\k\p\e\s\i\b\z\z\r\8\d\9\k\m\3\b\q\t\y\g\5\v\4\x\z\g\i\7\g\h\8\g\6\g\b\j\5\v\z\y\v\8\y\8\5\z\w\4\e\x\6\p\v\7\2\9\q\z\j\v\3\j\u\u\s\j\p\u\h\i\0\x\b\p\n\g\5\8\r\q\7\2\g\q\p\0\i\f\6\2\k\l\e\a\9\i\a\d\p\4\n\u\2\s\q\h\0\8\5\9\b\p\4\4\s\e\k\e\q\w\p\p\2\8\d\p\6\a\3\e\z\h\a\v\m\e\h\u\b\g\k\w\r\t\b\w\c\u\h\g\q\4\p\5\a\7\m\h\u\0\n\z\g\i\u\k\c\f\1\q\9\4\s\3\l\l\8\9\1\7\v\u\0\x\w\g\7\5\5\5\9\j\d\0\c\7\c\b\n\q\c\d\4\l\v\h\h\9\9\c\t\z\f\5\h\0\e\9\w\h\a\9\v\9\1\7\z\b\l\u\h\r\s\y\7\3\o\i\n\f\t\q\u\r\5\i\c\8\l\b\f\t\u\s\3\4\o\q\f\v\2\7\2\0\y\c\r\z\q\l\d\w\s\e\m\9\9\t\3\l\g\o\r\8\k\x\o\y\p\l\s\8\v\5\2\t\0\k\u ]] 00:06:13.514 10:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.514 10:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:13.514 [2024-07-25 10:44:43.164069] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:13.514 [2024-07-25 10:44:43.164168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62428 ] 00:06:13.772 [2024-07-25 10:44:43.297533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.772 [2024-07-25 10:44:43.429927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.772 [2024-07-25 10:44:43.505093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.289  Copying: 512/512 [B] (average 166 kBps) 00:06:14.289 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wo2yqmx36y0vg4h4m30alg8bh1ahdzisjvv8946cz0eknwnh9ksbgxnbvybt9yhutimfzrvi05ezzchg9rv012qpiblggdxkdwr81y4hfnklfylcqzpyrgjpz4yon82d6xfgw7ftqwarnnrya258kfs4u5sry9gqvvdnqat0l3f7172db6nap6oi2cd4ixt5pb54pslexu0t2ee574ccnkpesibzzr8d9km3bqtyg5v4xzgi7gh8g6gbj5vzyv8y85zw4ex6pv729qzjv3juusjpuhi0xbpng58rq72gqp0if62klea9iadp4nu2sqh0859bp44sekeqwpp28dp6a3ezhavmehubgkwrtbwcuhgq4p5a7mhu0nzgiukcf1q94s3ll8917vu0xwg75559jd0c7cbnqcd4lvhh99ctzf5h0e9wha9v917zbluhrsy73oinftqur5ic8lbftus34oqfv2720ycrzqldwsem99t3lgor8kxoypls8v52t0ku == \w\o\2\y\q\m\x\3\6\y\0\v\g\4\h\4\m\3\0\a\l\g\8\b\h\1\a\h\d\z\i\s\j\v\v\8\9\4\6\c\z\0\e\k\n\w\n\h\9\k\s\b\g\x\n\b\v\y\b\t\9\y\h\u\t\i\m\f\z\r\v\i\0\5\e\z\z\c\h\g\9\r\v\0\1\2\q\p\i\b\l\g\g\d\x\k\d\w\r\8\1\y\4\h\f\n\k\l\f\y\l\c\q\z\p\y\r\g\j\p\z\4\y\o\n\8\2\d\6\x\f\g\w\7\f\t\q\w\a\r\n\n\r\y\a\2\5\8\k\f\s\4\u\5\s\r\y\9\g\q\v\v\d\n\q\a\t\0\l\3\f\7\1\7\2\d\b\6\n\a\p\6\o\i\2\c\d\4\i\x\t\5\p\b\5\4\p\s\l\e\x\u\0\t\2\e\e\5\7\4\c\c\n\k\p\e\s\i\b\z\z\r\8\d\9\k\m\3\b\q\t\y\g\5\v\4\x\z\g\i\7\g\h\8\g\6\g\b\j\5\v\z\y\v\8\y\8\5\z\w\4\e\x\6\p\v\7\2\9\q\z\j\v\3\j\u\u\s\j\p\u\h\i\0\x\b\p\n\g\5\8\r\q\7\2\g\q\p\0\i\f\6\2\k\l\e\a\9\i\a\d\p\4\n\u\2\s\q\h\0\8\5\9\b\p\4\4\s\e\k\e\q\w\p\p\2\8\d\p\6\a\3\e\z\h\a\v\m\e\h\u\b\g\k\w\r\t\b\w\c\u\h\g\q\4\p\5\a\7\m\h\u\0\n\z\g\i\u\k\c\f\1\q\9\4\s\3\l\l\8\9\1\7\v\u\0\x\w\g\7\5\5\5\9\j\d\0\c\7\c\b\n\q\c\d\4\l\v\h\h\9\9\c\t\z\f\5\h\0\e\9\w\h\a\9\v\9\1\7\z\b\l\u\h\r\s\y\7\3\o\i\n\f\t\q\u\r\5\i\c\8\l\b\f\t\u\s\3\4\o\q\f\v\2\7\2\0\y\c\r\z\q\l\d\w\s\e\m\9\9\t\3\l\g\o\r\8\k\x\o\y\p\l\s\8\v\5\2\t\0\k\u ]] 00:06:14.289 00:06:14.289 real 0m5.900s 00:06:14.289 user 0m3.542s 00:06:14.289 sys 0m2.900s 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.289 ************************************ 00:06:14.289 END TEST dd_flags_misc 00:06:14.289 ************************************ 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:14.289 * Second test run, disabling liburing, forcing AIO 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
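The dd_flags_misc loop traced above pairs each read flag (direct, nonblock) with each write flag (direct, nonblock, sync, dsync), copies dd.dump0 to dd.dump1 through spdk_dd, and verifies the copy with the [[ ... == ... ]] string comparison visible in the xtrace output; the second pass announced above only appends --aio to the spdk_dd arguments so the same matrix runs over the AIO path instead of io_uring. As a rough sketch of that pattern, using coreutils dd rather than the SPDK test script (file names are shortened from the paths in the log, block size and count are illustrative):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      # direct may require the copy size to match the filesystem's block alignment
      dd if=dd.dump0 of=dd.dump1 iflag="$flag_ro" oflag="$flag_rw" bs=512 count=1 status=none
      [[ "$(< dd.dump0)" == "$(< dd.dump1)" ]] || echo "mismatch with $flag_ro/$flag_rw"
    done
  done
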
00:06:14.289 ************************************ 00:06:14.289 START TEST dd_flag_append_forced_aio 00:06:14.289 ************************************ 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=uzp02lzdr1o3iir75vsfn34tuse9vyey 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=o73binbl0n1wisgadtlvuxk8qqc3minm 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s uzp02lzdr1o3iir75vsfn34tuse9vyey 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s o73binbl0n1wisgadtlvuxk8qqc3minm 00:06:14.289 10:44:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:14.289 [2024-07-25 10:44:43.952925] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:14.289 [2024-07-25 10:44:43.953037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62461 ] 00:06:14.548 [2024-07-25 10:44:44.092531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.548 [2024-07-25 10:44:44.218183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.807 [2024-07-25 10:44:44.291200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.066  Copying: 32/32 [B] (average 31 kBps) 00:06:15.066 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ o73binbl0n1wisgadtlvuxk8qqc3minmuzp02lzdr1o3iir75vsfn34tuse9vyey == \o\7\3\b\i\n\b\l\0\n\1\w\i\s\g\a\d\t\l\v\u\x\k\8\q\q\c\3\m\i\n\m\u\z\p\0\2\l\z\d\r\1\o\3\i\i\r\7\5\v\s\f\n\3\4\t\u\s\e\9\v\y\e\y ]] 00:06:15.066 00:06:15.066 real 0m0.747s 00:06:15.066 user 0m0.437s 00:06:15.066 sys 0m0.186s 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.066 ************************************ 00:06:15.066 END TEST dd_flag_append_forced_aio 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:15.066 ************************************ 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:15.066 ************************************ 00:06:15.066 START TEST dd_flag_directory_forced_aio 00:06:15.066 ************************************ 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.066 10:44:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.066 [2024-07-25 10:44:44.749923] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:15.066 [2024-07-25 10:44:44.750025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62488 ] 00:06:15.325 [2024-07-25 10:44:44.888388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.325 [2024-07-25 10:44:45.030354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.584 [2024-07-25 10:44:45.102504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.584 [2024-07-25 10:44:45.144584] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:15.584 [2024-07-25 10:44:45.144641] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:15.584 [2024-07-25 10:44:45.144660] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.584 [2024-07-25 10:44:45.307422] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.844 10:44:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:15.844 [2024-07-25 10:44:45.474094] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:15.844 [2024-07-25 10:44:45.474165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62498 ] 00:06:16.103 [2024-07-25 10:44:45.606000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.103 [2024-07-25 10:44:45.741851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.103 [2024-07-25 10:44:45.817416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.363 [2024-07-25 10:44:45.859678] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:16.363 [2024-07-25 10:44:45.859736] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:16.363 [2024-07-25 10:44:45.859768] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.363 [2024-07-25 10:44:46.023114] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
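The two NOT checks above point --iflag=directory and then --oflag=directory at a regular file and require spdk_dd to fail with "Not a directory"; the es lines show the exit status being normalised (236 is above 128 so it is reduced, and any remaining failure collapses to 1) before asserting that it was non-zero. A hypothetical expect-failure wrapper with the same visible behaviour would look roughly like this (the helper actually invoked above is NOT from autotest_common.sh, which does more than this sketch; $SPDK_DD stands in for the build/bin/spdk_dd path used throughout the log):

  expect_failure() {                      # hypothetical name, not the SPDK helper itself
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))  # 236 -> 108, as in the trace above
    (( es != 0 )) && es=1                 # collapse any remaining failure to 1
    (( es == 1 ))                         # succeed only if the command failed
  }
  expect_failure "$SPDK_DD" --aio --if=dd.dump0 --iflag=directory --of=dd.dump0
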
00:06:16.622 00:06:16.622 real 0m1.455s 00:06:16.622 user 0m0.872s 00:06:16.622 sys 0m0.372s 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.622 ************************************ 00:06:16.622 END TEST dd_flag_directory_forced_aio 00:06:16.622 ************************************ 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:16.622 ************************************ 00:06:16.622 START TEST dd_flag_nofollow_forced_aio 00:06:16.622 ************************************ 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.622 10:44:46 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.622 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.622 [2024-07-25 10:44:46.259720] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:16.622 [2024-07-25 10:44:46.259797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62532 ] 00:06:16.881 [2024-07-25 10:44:46.395612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.881 [2024-07-25 10:44:46.505270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.881 [2024-07-25 10:44:46.580902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.140 [2024-07-25 10:44:46.628013] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:17.140 [2024-07-25 10:44:46.628062] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:17.140 [2024-07-25 10:44:46.628077] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.140 [2024-07-25 10:44:46.796792] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.399 10:44:46 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.399 10:44:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:17.399 [2024-07-25 10:44:46.984336] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:17.399 [2024-07-25 10:44:46.984438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62541 ] 00:06:17.399 [2024-07-25 10:44:47.118220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.658 [2024-07-25 10:44:47.234724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.658 [2024-07-25 10:44:47.309645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.658 [2024-07-25 10:44:47.355322] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:17.658 [2024-07-25 10:44:47.355394] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:17.658 [2024-07-25 10:44:47.355426] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.916 [2024-07-25 10:44:47.517813] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.916 10:44:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:17.916 10:44:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.916 10:44:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:17.916 10:44:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:17.916 10:44:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:17.916 10:44:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.916 10:44:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:17.916 10:44:47 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:17.916 10:44:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:17.916 10:44:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.175 [2024-07-25 10:44:47.689710] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:18.175 [2024-07-25 10:44:47.689809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62549 ] 00:06:18.175 [2024-07-25 10:44:47.828397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.433 [2024-07-25 10:44:47.962659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.433 [2024-07-25 10:44:48.041802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.692  Copying: 512/512 [B] (average 500 kBps) 00:06:18.692 00:06:18.692 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 4pta0q8j4owc562s4f8f2po5mmo4yg1t3qlftxjgeb6gvan7i9e0u7mdy9fl8p481khd22plkkyrif8tx8ntn6t37irfjay5kzs8240ul7x57pnzdpp5irsc5dtjx7f7siwa4uachlcaxauqbedivuaz519clt9ryibmr7i0wz597sokq9obmce0xyeyaxrwlrf7e0rmxt9ht0jn45m595scn8u7fkbtlx58n8cbw1wtl0s3e8rxp96shdfxhpnn50unalcr1edmqfelj86zog84cuko5ms6xzvr8pp2xtxrhytutpirlyt7cmrydzs1lot5m3psubo9cwd72trwda6b3pqz2whdwg9t01p6oebbqc82nbqndxiul45cej07w9pafbbpupn4mg9juobr0jyuwpeh520c4t77vy9ank7d3zl8qfiejg5yaowuoupdjda5by8osybxbaisik7v6kiegn4eis412wnl8d402yrl7fy82t356w1cby6ermy9 == \4\p\t\a\0\q\8\j\4\o\w\c\5\6\2\s\4\f\8\f\2\p\o\5\m\m\o\4\y\g\1\t\3\q\l\f\t\x\j\g\e\b\6\g\v\a\n\7\i\9\e\0\u\7\m\d\y\9\f\l\8\p\4\8\1\k\h\d\2\2\p\l\k\k\y\r\i\f\8\t\x\8\n\t\n\6\t\3\7\i\r\f\j\a\y\5\k\z\s\8\2\4\0\u\l\7\x\5\7\p\n\z\d\p\p\5\i\r\s\c\5\d\t\j\x\7\f\7\s\i\w\a\4\u\a\c\h\l\c\a\x\a\u\q\b\e\d\i\v\u\a\z\5\1\9\c\l\t\9\r\y\i\b\m\r\7\i\0\w\z\5\9\7\s\o\k\q\9\o\b\m\c\e\0\x\y\e\y\a\x\r\w\l\r\f\7\e\0\r\m\x\t\9\h\t\0\j\n\4\5\m\5\9\5\s\c\n\8\u\7\f\k\b\t\l\x\5\8\n\8\c\b\w\1\w\t\l\0\s\3\e\8\r\x\p\9\6\s\h\d\f\x\h\p\n\n\5\0\u\n\a\l\c\r\1\e\d\m\q\f\e\l\j\8\6\z\o\g\8\4\c\u\k\o\5\m\s\6\x\z\v\r\8\p\p\2\x\t\x\r\h\y\t\u\t\p\i\r\l\y\t\7\c\m\r\y\d\z\s\1\l\o\t\5\m\3\p\s\u\b\o\9\c\w\d\7\2\t\r\w\d\a\6\b\3\p\q\z\2\w\h\d\w\g\9\t\0\1\p\6\o\e\b\b\q\c\8\2\n\b\q\n\d\x\i\u\l\4\5\c\e\j\0\7\w\9\p\a\f\b\b\p\u\p\n\4\m\g\9\j\u\o\b\r\0\j\y\u\w\p\e\h\5\2\0\c\4\t\7\7\v\y\9\a\n\k\7\d\3\z\l\8\q\f\i\e\j\g\5\y\a\o\w\u\o\u\p\d\j\d\a\5\b\y\8\o\s\y\b\x\b\a\i\s\i\k\7\v\6\k\i\e\g\n\4\e\i\s\4\1\2\w\n\l\8\d\4\0\2\y\r\l\7\f\y\8\2\t\3\5\6\w\1\c\b\y\6\e\r\m\y\9 ]] 00:06:18.692 00:06:18.692 real 0m2.183s 00:06:18.692 user 0m1.282s 00:06:18.692 sys 0m0.568s 00:06:18.692 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.692 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.692 ************************************ 00:06:18.692 END TEST dd_flag_nofollow_forced_aio 00:06:18.692 ************************************ 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:18.951 
10:44:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.951 ************************************ 00:06:18.951 START TEST dd_flag_noatime_forced_aio 00:06:18.951 ************************************ 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721904288 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721904288 00:06:18.951 10:44:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:19.959 10:44:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.959 [2024-07-25 10:44:49.529738] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:19.959 [2024-07-25 10:44:49.529847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62595 ] 00:06:19.959 [2024-07-25 10:44:49.670663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.217 [2024-07-25 10:44:49.821472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.217 [2024-07-25 10:44:49.898819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.785  Copying: 512/512 [B] (average 500 kBps) 00:06:20.785 00:06:20.785 10:44:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:20.785 10:44:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721904288 )) 00:06:20.785 10:44:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.785 10:44:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721904288 )) 00:06:20.785 10:44:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.785 [2024-07-25 10:44:50.311506] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:20.785 [2024-07-25 10:44:50.311616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62612 ] 00:06:20.785 [2024-07-25 10:44:50.450501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.044 [2024-07-25 10:44:50.585456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.044 [2024-07-25 10:44:50.659936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.302  Copying: 512/512 [B] (average 500 kBps) 00:06:21.302 00:06:21.302 10:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.302 10:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721904290 )) 00:06:21.302 00:06:21.302 real 0m2.590s 00:06:21.302 user 0m0.924s 00:06:21.302 sys 0m0.411s 00:06:21.302 10:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.302 10:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:21.302 ************************************ 00:06:21.302 END TEST dd_flag_noatime_forced_aio 00:06:21.302 ************************************ 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:21.562 
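The noatime test that just ended records the access time of dd.dump0 with stat --printf=%X, sleeps one second, reads the file through spdk_dd with --iflag=noatime, and asserts that the atime did not move; a follow-up copy without the flag is then checked against a slightly later bound. Reduced to a coreutils sketch (paths and sizes are illustrative, and whether atime advances at all also depends on the mount's relatime/noatime options):

  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  dd if=dd.dump0 of=dd.dump1 iflag=noatime bs=512 count=1 status=none  # read with O_NOATIME
  (( $(stat --printf=%X dd.dump0) == atime_before )) && echo 'atime preserved'
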
************************************ 00:06:21.562 START TEST dd_flags_misc_forced_aio 00:06:21.562 ************************************ 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.562 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:21.562 [2024-07-25 10:44:51.148029] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:21.562 [2024-07-25 10:44:51.148115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62644 ] 00:06:21.562 [2024-07-25 10:44:51.281623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.821 [2024-07-25 10:44:51.420780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.821 [2024-07-25 10:44:51.495269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.389  Copying: 512/512 [B] (average 500 kBps) 00:06:22.389 00:06:22.389 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h1sskhwngwd59lspix7hsala8et45uy57ueb1hfccpheyu0tq37pjdofrfbj69gi29p1migsulwdk5awxzz60q92vmuutf30m4qgy7ksqdsij50hhlevaz9csj1camlk365qwfth1n591uessazeyscj10nt4fluozwo6cftx112xggysbsodw7l2d4a02jafv51sevyah778nt3tls4u8evv7y59myur31l9edzg0s7uksv1ueut546vvhlzdgs0e08fj2j3u7f89ivvnc0qh2ttfritn3lgblrcmawy5pam1e1uvj8hai6l0gziyjebz28kicmqvslz4btm565nqc6x05oqgfpaf13jhinqd1cgufzjfuwvnx4lj6pmlb1u5wtm7z1ao8fulhrywwch8906elakcjn90jru0omysb0nm8gnorowq1k7do2s64ls1ltfx6gamsslct52er99d4crkj1r4caj0248ft7nlkwdqrl199qu6iv7p9edhq0 == 
\h\1\s\s\k\h\w\n\g\w\d\5\9\l\s\p\i\x\7\h\s\a\l\a\8\e\t\4\5\u\y\5\7\u\e\b\1\h\f\c\c\p\h\e\y\u\0\t\q\3\7\p\j\d\o\f\r\f\b\j\6\9\g\i\2\9\p\1\m\i\g\s\u\l\w\d\k\5\a\w\x\z\z\6\0\q\9\2\v\m\u\u\t\f\3\0\m\4\q\g\y\7\k\s\q\d\s\i\j\5\0\h\h\l\e\v\a\z\9\c\s\j\1\c\a\m\l\k\3\6\5\q\w\f\t\h\1\n\5\9\1\u\e\s\s\a\z\e\y\s\c\j\1\0\n\t\4\f\l\u\o\z\w\o\6\c\f\t\x\1\1\2\x\g\g\y\s\b\s\o\d\w\7\l\2\d\4\a\0\2\j\a\f\v\5\1\s\e\v\y\a\h\7\7\8\n\t\3\t\l\s\4\u\8\e\v\v\7\y\5\9\m\y\u\r\3\1\l\9\e\d\z\g\0\s\7\u\k\s\v\1\u\e\u\t\5\4\6\v\v\h\l\z\d\g\s\0\e\0\8\f\j\2\j\3\u\7\f\8\9\i\v\v\n\c\0\q\h\2\t\t\f\r\i\t\n\3\l\g\b\l\r\c\m\a\w\y\5\p\a\m\1\e\1\u\v\j\8\h\a\i\6\l\0\g\z\i\y\j\e\b\z\2\8\k\i\c\m\q\v\s\l\z\4\b\t\m\5\6\5\n\q\c\6\x\0\5\o\q\g\f\p\a\f\1\3\j\h\i\n\q\d\1\c\g\u\f\z\j\f\u\w\v\n\x\4\l\j\6\p\m\l\b\1\u\5\w\t\m\7\z\1\a\o\8\f\u\l\h\r\y\w\w\c\h\8\9\0\6\e\l\a\k\c\j\n\9\0\j\r\u\0\o\m\y\s\b\0\n\m\8\g\n\o\r\o\w\q\1\k\7\d\o\2\s\6\4\l\s\1\l\t\f\x\6\g\a\m\s\s\l\c\t\5\2\e\r\9\9\d\4\c\r\k\j\1\r\4\c\a\j\0\2\4\8\f\t\7\n\l\k\w\d\q\r\l\1\9\9\q\u\6\i\v\7\p\9\e\d\h\q\0 ]] 00:06:22.389 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.389 10:44:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:22.389 [2024-07-25 10:44:51.887050] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:22.389 [2024-07-25 10:44:51.887157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62646 ] 00:06:22.389 [2024-07-25 10:44:52.024983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.648 [2024-07-25 10:44:52.158648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.648 [2024-07-25 10:44:52.232346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.907  Copying: 512/512 [B] (average 500 kBps) 00:06:22.907 00:06:22.907 10:44:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h1sskhwngwd59lspix7hsala8et45uy57ueb1hfccpheyu0tq37pjdofrfbj69gi29p1migsulwdk5awxzz60q92vmuutf30m4qgy7ksqdsij50hhlevaz9csj1camlk365qwfth1n591uessazeyscj10nt4fluozwo6cftx112xggysbsodw7l2d4a02jafv51sevyah778nt3tls4u8evv7y59myur31l9edzg0s7uksv1ueut546vvhlzdgs0e08fj2j3u7f89ivvnc0qh2ttfritn3lgblrcmawy5pam1e1uvj8hai6l0gziyjebz28kicmqvslz4btm565nqc6x05oqgfpaf13jhinqd1cgufzjfuwvnx4lj6pmlb1u5wtm7z1ao8fulhrywwch8906elakcjn90jru0omysb0nm8gnorowq1k7do2s64ls1ltfx6gamsslct52er99d4crkj1r4caj0248ft7nlkwdqrl199qu6iv7p9edhq0 == 
\h\1\s\s\k\h\w\n\g\w\d\5\9\l\s\p\i\x\7\h\s\a\l\a\8\e\t\4\5\u\y\5\7\u\e\b\1\h\f\c\c\p\h\e\y\u\0\t\q\3\7\p\j\d\o\f\r\f\b\j\6\9\g\i\2\9\p\1\m\i\g\s\u\l\w\d\k\5\a\w\x\z\z\6\0\q\9\2\v\m\u\u\t\f\3\0\m\4\q\g\y\7\k\s\q\d\s\i\j\5\0\h\h\l\e\v\a\z\9\c\s\j\1\c\a\m\l\k\3\6\5\q\w\f\t\h\1\n\5\9\1\u\e\s\s\a\z\e\y\s\c\j\1\0\n\t\4\f\l\u\o\z\w\o\6\c\f\t\x\1\1\2\x\g\g\y\s\b\s\o\d\w\7\l\2\d\4\a\0\2\j\a\f\v\5\1\s\e\v\y\a\h\7\7\8\n\t\3\t\l\s\4\u\8\e\v\v\7\y\5\9\m\y\u\r\3\1\l\9\e\d\z\g\0\s\7\u\k\s\v\1\u\e\u\t\5\4\6\v\v\h\l\z\d\g\s\0\e\0\8\f\j\2\j\3\u\7\f\8\9\i\v\v\n\c\0\q\h\2\t\t\f\r\i\t\n\3\l\g\b\l\r\c\m\a\w\y\5\p\a\m\1\e\1\u\v\j\8\h\a\i\6\l\0\g\z\i\y\j\e\b\z\2\8\k\i\c\m\q\v\s\l\z\4\b\t\m\5\6\5\n\q\c\6\x\0\5\o\q\g\f\p\a\f\1\3\j\h\i\n\q\d\1\c\g\u\f\z\j\f\u\w\v\n\x\4\l\j\6\p\m\l\b\1\u\5\w\t\m\7\z\1\a\o\8\f\u\l\h\r\y\w\w\c\h\8\9\0\6\e\l\a\k\c\j\n\9\0\j\r\u\0\o\m\y\s\b\0\n\m\8\g\n\o\r\o\w\q\1\k\7\d\o\2\s\6\4\l\s\1\l\t\f\x\6\g\a\m\s\s\l\c\t\5\2\e\r\9\9\d\4\c\r\k\j\1\r\4\c\a\j\0\2\4\8\f\t\7\n\l\k\w\d\q\r\l\1\9\9\q\u\6\i\v\7\p\9\e\d\h\q\0 ]] 00:06:22.907 10:44:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.907 10:44:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:23.166 [2024-07-25 10:44:52.696223] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:23.166 [2024-07-25 10:44:52.696354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62659 ] 00:06:23.166 [2024-07-25 10:44:52.836285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.425 [2024-07-25 10:44:52.996426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.425 [2024-07-25 10:44:53.071011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.992  Copying: 512/512 [B] (average 166 kBps) 00:06:23.992 00:06:23.992 10:44:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h1sskhwngwd59lspix7hsala8et45uy57ueb1hfccpheyu0tq37pjdofrfbj69gi29p1migsulwdk5awxzz60q92vmuutf30m4qgy7ksqdsij50hhlevaz9csj1camlk365qwfth1n591uessazeyscj10nt4fluozwo6cftx112xggysbsodw7l2d4a02jafv51sevyah778nt3tls4u8evv7y59myur31l9edzg0s7uksv1ueut546vvhlzdgs0e08fj2j3u7f89ivvnc0qh2ttfritn3lgblrcmawy5pam1e1uvj8hai6l0gziyjebz28kicmqvslz4btm565nqc6x05oqgfpaf13jhinqd1cgufzjfuwvnx4lj6pmlb1u5wtm7z1ao8fulhrywwch8906elakcjn90jru0omysb0nm8gnorowq1k7do2s64ls1ltfx6gamsslct52er99d4crkj1r4caj0248ft7nlkwdqrl199qu6iv7p9edhq0 == 
\h\1\s\s\k\h\w\n\g\w\d\5\9\l\s\p\i\x\7\h\s\a\l\a\8\e\t\4\5\u\y\5\7\u\e\b\1\h\f\c\c\p\h\e\y\u\0\t\q\3\7\p\j\d\o\f\r\f\b\j\6\9\g\i\2\9\p\1\m\i\g\s\u\l\w\d\k\5\a\w\x\z\z\6\0\q\9\2\v\m\u\u\t\f\3\0\m\4\q\g\y\7\k\s\q\d\s\i\j\5\0\h\h\l\e\v\a\z\9\c\s\j\1\c\a\m\l\k\3\6\5\q\w\f\t\h\1\n\5\9\1\u\e\s\s\a\z\e\y\s\c\j\1\0\n\t\4\f\l\u\o\z\w\o\6\c\f\t\x\1\1\2\x\g\g\y\s\b\s\o\d\w\7\l\2\d\4\a\0\2\j\a\f\v\5\1\s\e\v\y\a\h\7\7\8\n\t\3\t\l\s\4\u\8\e\v\v\7\y\5\9\m\y\u\r\3\1\l\9\e\d\z\g\0\s\7\u\k\s\v\1\u\e\u\t\5\4\6\v\v\h\l\z\d\g\s\0\e\0\8\f\j\2\j\3\u\7\f\8\9\i\v\v\n\c\0\q\h\2\t\t\f\r\i\t\n\3\l\g\b\l\r\c\m\a\w\y\5\p\a\m\1\e\1\u\v\j\8\h\a\i\6\l\0\g\z\i\y\j\e\b\z\2\8\k\i\c\m\q\v\s\l\z\4\b\t\m\5\6\5\n\q\c\6\x\0\5\o\q\g\f\p\a\f\1\3\j\h\i\n\q\d\1\c\g\u\f\z\j\f\u\w\v\n\x\4\l\j\6\p\m\l\b\1\u\5\w\t\m\7\z\1\a\o\8\f\u\l\h\r\y\w\w\c\h\8\9\0\6\e\l\a\k\c\j\n\9\0\j\r\u\0\o\m\y\s\b\0\n\m\8\g\n\o\r\o\w\q\1\k\7\d\o\2\s\6\4\l\s\1\l\t\f\x\6\g\a\m\s\s\l\c\t\5\2\e\r\9\9\d\4\c\r\k\j\1\r\4\c\a\j\0\2\4\8\f\t\7\n\l\k\w\d\q\r\l\1\9\9\q\u\6\i\v\7\p\9\e\d\h\q\0 ]] 00:06:23.992 10:44:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.992 10:44:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:23.992 [2024-07-25 10:44:53.503144] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:23.992 [2024-07-25 10:44:53.503252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62672 ] 00:06:23.992 [2024-07-25 10:44:53.641840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.250 [2024-07-25 10:44:53.779096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.250 [2024-07-25 10:44:53.853750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.509  Copying: 512/512 [B] (average 166 kBps) 00:06:24.509 00:06:24.509 10:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h1sskhwngwd59lspix7hsala8et45uy57ueb1hfccpheyu0tq37pjdofrfbj69gi29p1migsulwdk5awxzz60q92vmuutf30m4qgy7ksqdsij50hhlevaz9csj1camlk365qwfth1n591uessazeyscj10nt4fluozwo6cftx112xggysbsodw7l2d4a02jafv51sevyah778nt3tls4u8evv7y59myur31l9edzg0s7uksv1ueut546vvhlzdgs0e08fj2j3u7f89ivvnc0qh2ttfritn3lgblrcmawy5pam1e1uvj8hai6l0gziyjebz28kicmqvslz4btm565nqc6x05oqgfpaf13jhinqd1cgufzjfuwvnx4lj6pmlb1u5wtm7z1ao8fulhrywwch8906elakcjn90jru0omysb0nm8gnorowq1k7do2s64ls1ltfx6gamsslct52er99d4crkj1r4caj0248ft7nlkwdqrl199qu6iv7p9edhq0 == 
\h\1\s\s\k\h\w\n\g\w\d\5\9\l\s\p\i\x\7\h\s\a\l\a\8\e\t\4\5\u\y\5\7\u\e\b\1\h\f\c\c\p\h\e\y\u\0\t\q\3\7\p\j\d\o\f\r\f\b\j\6\9\g\i\2\9\p\1\m\i\g\s\u\l\w\d\k\5\a\w\x\z\z\6\0\q\9\2\v\m\u\u\t\f\3\0\m\4\q\g\y\7\k\s\q\d\s\i\j\5\0\h\h\l\e\v\a\z\9\c\s\j\1\c\a\m\l\k\3\6\5\q\w\f\t\h\1\n\5\9\1\u\e\s\s\a\z\e\y\s\c\j\1\0\n\t\4\f\l\u\o\z\w\o\6\c\f\t\x\1\1\2\x\g\g\y\s\b\s\o\d\w\7\l\2\d\4\a\0\2\j\a\f\v\5\1\s\e\v\y\a\h\7\7\8\n\t\3\t\l\s\4\u\8\e\v\v\7\y\5\9\m\y\u\r\3\1\l\9\e\d\z\g\0\s\7\u\k\s\v\1\u\e\u\t\5\4\6\v\v\h\l\z\d\g\s\0\e\0\8\f\j\2\j\3\u\7\f\8\9\i\v\v\n\c\0\q\h\2\t\t\f\r\i\t\n\3\l\g\b\l\r\c\m\a\w\y\5\p\a\m\1\e\1\u\v\j\8\h\a\i\6\l\0\g\z\i\y\j\e\b\z\2\8\k\i\c\m\q\v\s\l\z\4\b\t\m\5\6\5\n\q\c\6\x\0\5\o\q\g\f\p\a\f\1\3\j\h\i\n\q\d\1\c\g\u\f\z\j\f\u\w\v\n\x\4\l\j\6\p\m\l\b\1\u\5\w\t\m\7\z\1\a\o\8\f\u\l\h\r\y\w\w\c\h\8\9\0\6\e\l\a\k\c\j\n\9\0\j\r\u\0\o\m\y\s\b\0\n\m\8\g\n\o\r\o\w\q\1\k\7\d\o\2\s\6\4\l\s\1\l\t\f\x\6\g\a\m\s\s\l\c\t\5\2\e\r\9\9\d\4\c\r\k\j\1\r\4\c\a\j\0\2\4\8\f\t\7\n\l\k\w\d\q\r\l\1\9\9\q\u\6\i\v\7\p\9\e\d\h\q\0 ]] 00:06:24.509 10:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:24.509 10:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:24.509 10:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:24.509 10:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:24.509 10:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.509 10:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:24.767 [2024-07-25 10:44:54.298028] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:24.767 [2024-07-25 10:44:54.298146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62685 ] 00:06:24.767 [2024-07-25 10:44:54.436159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.027 [2024-07-25 10:44:54.561926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.027 [2024-07-25 10:44:54.636113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.286  Copying: 512/512 [B] (average 500 kBps) 00:06:25.286 00:06:25.286 10:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p4bptzz2snvosna9n5ly2d8hncdh5epxjywxae3gvhoxwvkdfhteqozwijmax0c95deoteairsg8ownie1neqzt7k9e9cr1y78gbsaikjmbwhpzunq9muwzm4pv8p9seyegi39e6drh3grl7w5j5s7kh7kvpcyts9m2fasle8bfvpbthtye44grldxrkdcosxt5vr1gvjup2j4otxglblb16okkpt6dz5u2nmbapyjcnox4988g75hhs22recq3umfjnhqpgaqila5ha1ljviy7exuumtbl3dcy9hv2g9q5wjahw8ch3cj4ofvm138esulsfi38fz9qtwdyzteea12gvgf4y0yrfj6iawt2cfbkxzlxuxzkrhgg68dlt1qicws942qxhp5q3l43b43tv3vqduv56oxtmj39qye0a9pl5es6x51z8f8q1j2hdcgsfbgt9tcp32u0bd78dperibllmqlbroihb5e5fscgvbulya2chg1ht681roqxigq4t == \p\4\b\p\t\z\z\2\s\n\v\o\s\n\a\9\n\5\l\y\2\d\8\h\n\c\d\h\5\e\p\x\j\y\w\x\a\e\3\g\v\h\o\x\w\v\k\d\f\h\t\e\q\o\z\w\i\j\m\a\x\0\c\9\5\d\e\o\t\e\a\i\r\s\g\8\o\w\n\i\e\1\n\e\q\z\t\7\k\9\e\9\c\r\1\y\7\8\g\b\s\a\i\k\j\m\b\w\h\p\z\u\n\q\9\m\u\w\z\m\4\p\v\8\p\9\s\e\y\e\g\i\3\9\e\6\d\r\h\3\g\r\l\7\w\5\j\5\s\7\k\h\7\k\v\p\c\y\t\s\9\m\2\f\a\s\l\e\8\b\f\v\p\b\t\h\t\y\e\4\4\g\r\l\d\x\r\k\d\c\o\s\x\t\5\v\r\1\g\v\j\u\p\2\j\4\o\t\x\g\l\b\l\b\1\6\o\k\k\p\t\6\d\z\5\u\2\n\m\b\a\p\y\j\c\n\o\x\4\9\8\8\g\7\5\h\h\s\2\2\r\e\c\q\3\u\m\f\j\n\h\q\p\g\a\q\i\l\a\5\h\a\1\l\j\v\i\y\7\e\x\u\u\m\t\b\l\3\d\c\y\9\h\v\2\g\9\q\5\w\j\a\h\w\8\c\h\3\c\j\4\o\f\v\m\1\3\8\e\s\u\l\s\f\i\3\8\f\z\9\q\t\w\d\y\z\t\e\e\a\1\2\g\v\g\f\4\y\0\y\r\f\j\6\i\a\w\t\2\c\f\b\k\x\z\l\x\u\x\z\k\r\h\g\g\6\8\d\l\t\1\q\i\c\w\s\9\4\2\q\x\h\p\5\q\3\l\4\3\b\4\3\t\v\3\v\q\d\u\v\5\6\o\x\t\m\j\3\9\q\y\e\0\a\9\p\l\5\e\s\6\x\5\1\z\8\f\8\q\1\j\2\h\d\c\g\s\f\b\g\t\9\t\c\p\3\2\u\0\b\d\7\8\d\p\e\r\i\b\l\l\m\q\l\b\r\o\i\h\b\5\e\5\f\s\c\g\v\b\u\l\y\a\2\c\h\g\1\h\t\6\8\1\r\o\q\x\i\g\q\4\t ]] 00:06:25.286 10:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.286 10:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:25.545 [2024-07-25 10:44:55.094830] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:25.545 [2024-07-25 10:44:55.094979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62687 ] 00:06:25.545 [2024-07-25 10:44:55.235768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.804 [2024-07-25 10:44:55.363600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.804 [2024-07-25 10:44:55.437697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.062  Copying: 512/512 [B] (average 500 kBps) 00:06:26.063 00:06:26.321 10:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p4bptzz2snvosna9n5ly2d8hncdh5epxjywxae3gvhoxwvkdfhteqozwijmax0c95deoteairsg8ownie1neqzt7k9e9cr1y78gbsaikjmbwhpzunq9muwzm4pv8p9seyegi39e6drh3grl7w5j5s7kh7kvpcyts9m2fasle8bfvpbthtye44grldxrkdcosxt5vr1gvjup2j4otxglblb16okkpt6dz5u2nmbapyjcnox4988g75hhs22recq3umfjnhqpgaqila5ha1ljviy7exuumtbl3dcy9hv2g9q5wjahw8ch3cj4ofvm138esulsfi38fz9qtwdyzteea12gvgf4y0yrfj6iawt2cfbkxzlxuxzkrhgg68dlt1qicws942qxhp5q3l43b43tv3vqduv56oxtmj39qye0a9pl5es6x51z8f8q1j2hdcgsfbgt9tcp32u0bd78dperibllmqlbroihb5e5fscgvbulya2chg1ht681roqxigq4t == \p\4\b\p\t\z\z\2\s\n\v\o\s\n\a\9\n\5\l\y\2\d\8\h\n\c\d\h\5\e\p\x\j\y\w\x\a\e\3\g\v\h\o\x\w\v\k\d\f\h\t\e\q\o\z\w\i\j\m\a\x\0\c\9\5\d\e\o\t\e\a\i\r\s\g\8\o\w\n\i\e\1\n\e\q\z\t\7\k\9\e\9\c\r\1\y\7\8\g\b\s\a\i\k\j\m\b\w\h\p\z\u\n\q\9\m\u\w\z\m\4\p\v\8\p\9\s\e\y\e\g\i\3\9\e\6\d\r\h\3\g\r\l\7\w\5\j\5\s\7\k\h\7\k\v\p\c\y\t\s\9\m\2\f\a\s\l\e\8\b\f\v\p\b\t\h\t\y\e\4\4\g\r\l\d\x\r\k\d\c\o\s\x\t\5\v\r\1\g\v\j\u\p\2\j\4\o\t\x\g\l\b\l\b\1\6\o\k\k\p\t\6\d\z\5\u\2\n\m\b\a\p\y\j\c\n\o\x\4\9\8\8\g\7\5\h\h\s\2\2\r\e\c\q\3\u\m\f\j\n\h\q\p\g\a\q\i\l\a\5\h\a\1\l\j\v\i\y\7\e\x\u\u\m\t\b\l\3\d\c\y\9\h\v\2\g\9\q\5\w\j\a\h\w\8\c\h\3\c\j\4\o\f\v\m\1\3\8\e\s\u\l\s\f\i\3\8\f\z\9\q\t\w\d\y\z\t\e\e\a\1\2\g\v\g\f\4\y\0\y\r\f\j\6\i\a\w\t\2\c\f\b\k\x\z\l\x\u\x\z\k\r\h\g\g\6\8\d\l\t\1\q\i\c\w\s\9\4\2\q\x\h\p\5\q\3\l\4\3\b\4\3\t\v\3\v\q\d\u\v\5\6\o\x\t\m\j\3\9\q\y\e\0\a\9\p\l\5\e\s\6\x\5\1\z\8\f\8\q\1\j\2\h\d\c\g\s\f\b\g\t\9\t\c\p\3\2\u\0\b\d\7\8\d\p\e\r\i\b\l\l\m\q\l\b\r\o\i\h\b\5\e\5\f\s\c\g\v\b\u\l\y\a\2\c\h\g\1\h\t\6\8\1\r\o\q\x\i\g\q\4\t ]] 00:06:26.321 10:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.321 10:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:26.321 [2024-07-25 10:44:55.858509] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:26.321 [2024-07-25 10:44:55.858621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62700 ] 00:06:26.321 [2024-07-25 10:44:55.994246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.580 [2024-07-25 10:44:56.144222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.580 [2024-07-25 10:44:56.222282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.146  Copying: 512/512 [B] (average 500 kBps) 00:06:27.146 00:06:27.147 10:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p4bptzz2snvosna9n5ly2d8hncdh5epxjywxae3gvhoxwvkdfhteqozwijmax0c95deoteairsg8ownie1neqzt7k9e9cr1y78gbsaikjmbwhpzunq9muwzm4pv8p9seyegi39e6drh3grl7w5j5s7kh7kvpcyts9m2fasle8bfvpbthtye44grldxrkdcosxt5vr1gvjup2j4otxglblb16okkpt6dz5u2nmbapyjcnox4988g75hhs22recq3umfjnhqpgaqila5ha1ljviy7exuumtbl3dcy9hv2g9q5wjahw8ch3cj4ofvm138esulsfi38fz9qtwdyzteea12gvgf4y0yrfj6iawt2cfbkxzlxuxzkrhgg68dlt1qicws942qxhp5q3l43b43tv3vqduv56oxtmj39qye0a9pl5es6x51z8f8q1j2hdcgsfbgt9tcp32u0bd78dperibllmqlbroihb5e5fscgvbulya2chg1ht681roqxigq4t == \p\4\b\p\t\z\z\2\s\n\v\o\s\n\a\9\n\5\l\y\2\d\8\h\n\c\d\h\5\e\p\x\j\y\w\x\a\e\3\g\v\h\o\x\w\v\k\d\f\h\t\e\q\o\z\w\i\j\m\a\x\0\c\9\5\d\e\o\t\e\a\i\r\s\g\8\o\w\n\i\e\1\n\e\q\z\t\7\k\9\e\9\c\r\1\y\7\8\g\b\s\a\i\k\j\m\b\w\h\p\z\u\n\q\9\m\u\w\z\m\4\p\v\8\p\9\s\e\y\e\g\i\3\9\e\6\d\r\h\3\g\r\l\7\w\5\j\5\s\7\k\h\7\k\v\p\c\y\t\s\9\m\2\f\a\s\l\e\8\b\f\v\p\b\t\h\t\y\e\4\4\g\r\l\d\x\r\k\d\c\o\s\x\t\5\v\r\1\g\v\j\u\p\2\j\4\o\t\x\g\l\b\l\b\1\6\o\k\k\p\t\6\d\z\5\u\2\n\m\b\a\p\y\j\c\n\o\x\4\9\8\8\g\7\5\h\h\s\2\2\r\e\c\q\3\u\m\f\j\n\h\q\p\g\a\q\i\l\a\5\h\a\1\l\j\v\i\y\7\e\x\u\u\m\t\b\l\3\d\c\y\9\h\v\2\g\9\q\5\w\j\a\h\w\8\c\h\3\c\j\4\o\f\v\m\1\3\8\e\s\u\l\s\f\i\3\8\f\z\9\q\t\w\d\y\z\t\e\e\a\1\2\g\v\g\f\4\y\0\y\r\f\j\6\i\a\w\t\2\c\f\b\k\x\z\l\x\u\x\z\k\r\h\g\g\6\8\d\l\t\1\q\i\c\w\s\9\4\2\q\x\h\p\5\q\3\l\4\3\b\4\3\t\v\3\v\q\d\u\v\5\6\o\x\t\m\j\3\9\q\y\e\0\a\9\p\l\5\e\s\6\x\5\1\z\8\f\8\q\1\j\2\h\d\c\g\s\f\b\g\t\9\t\c\p\3\2\u\0\b\d\7\8\d\p\e\r\i\b\l\l\m\q\l\b\r\o\i\h\b\5\e\5\f\s\c\g\v\b\u\l\y\a\2\c\h\g\1\h\t\6\8\1\r\o\q\x\i\g\q\4\t ]] 00:06:27.147 10:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.147 10:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:27.147 [2024-07-25 10:44:56.647934] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:27.147 [2024-07-25 10:44:56.648041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62713 ] 00:06:27.147 [2024-07-25 10:44:56.787705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.421 [2024-07-25 10:44:56.917653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.421 [2024-07-25 10:44:56.992502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.680  Copying: 512/512 [B] (average 166 kBps) 00:06:27.680 00:06:27.680 10:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p4bptzz2snvosna9n5ly2d8hncdh5epxjywxae3gvhoxwvkdfhteqozwijmax0c95deoteairsg8ownie1neqzt7k9e9cr1y78gbsaikjmbwhpzunq9muwzm4pv8p9seyegi39e6drh3grl7w5j5s7kh7kvpcyts9m2fasle8bfvpbthtye44grldxrkdcosxt5vr1gvjup2j4otxglblb16okkpt6dz5u2nmbapyjcnox4988g75hhs22recq3umfjnhqpgaqila5ha1ljviy7exuumtbl3dcy9hv2g9q5wjahw8ch3cj4ofvm138esulsfi38fz9qtwdyzteea12gvgf4y0yrfj6iawt2cfbkxzlxuxzkrhgg68dlt1qicws942qxhp5q3l43b43tv3vqduv56oxtmj39qye0a9pl5es6x51z8f8q1j2hdcgsfbgt9tcp32u0bd78dperibllmqlbroihb5e5fscgvbulya2chg1ht681roqxigq4t == \p\4\b\p\t\z\z\2\s\n\v\o\s\n\a\9\n\5\l\y\2\d\8\h\n\c\d\h\5\e\p\x\j\y\w\x\a\e\3\g\v\h\o\x\w\v\k\d\f\h\t\e\q\o\z\w\i\j\m\a\x\0\c\9\5\d\e\o\t\e\a\i\r\s\g\8\o\w\n\i\e\1\n\e\q\z\t\7\k\9\e\9\c\r\1\y\7\8\g\b\s\a\i\k\j\m\b\w\h\p\z\u\n\q\9\m\u\w\z\m\4\p\v\8\p\9\s\e\y\e\g\i\3\9\e\6\d\r\h\3\g\r\l\7\w\5\j\5\s\7\k\h\7\k\v\p\c\y\t\s\9\m\2\f\a\s\l\e\8\b\f\v\p\b\t\h\t\y\e\4\4\g\r\l\d\x\r\k\d\c\o\s\x\t\5\v\r\1\g\v\j\u\p\2\j\4\o\t\x\g\l\b\l\b\1\6\o\k\k\p\t\6\d\z\5\u\2\n\m\b\a\p\y\j\c\n\o\x\4\9\8\8\g\7\5\h\h\s\2\2\r\e\c\q\3\u\m\f\j\n\h\q\p\g\a\q\i\l\a\5\h\a\1\l\j\v\i\y\7\e\x\u\u\m\t\b\l\3\d\c\y\9\h\v\2\g\9\q\5\w\j\a\h\w\8\c\h\3\c\j\4\o\f\v\m\1\3\8\e\s\u\l\s\f\i\3\8\f\z\9\q\t\w\d\y\z\t\e\e\a\1\2\g\v\g\f\4\y\0\y\r\f\j\6\i\a\w\t\2\c\f\b\k\x\z\l\x\u\x\z\k\r\h\g\g\6\8\d\l\t\1\q\i\c\w\s\9\4\2\q\x\h\p\5\q\3\l\4\3\b\4\3\t\v\3\v\q\d\u\v\5\6\o\x\t\m\j\3\9\q\y\e\0\a\9\p\l\5\e\s\6\x\5\1\z\8\f\8\q\1\j\2\h\d\c\g\s\f\b\g\t\9\t\c\p\3\2\u\0\b\d\7\8\d\p\e\r\i\b\l\l\m\q\l\b\r\o\i\h\b\5\e\5\f\s\c\g\v\b\u\l\y\a\2\c\h\g\1\h\t\6\8\1\r\o\q\x\i\g\q\4\t ]] 00:06:27.680 00:06:27.680 real 0m6.282s 00:06:27.680 user 0m3.794s 00:06:27.680 sys 0m1.507s 00:06:27.680 10:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.680 10:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:27.680 ************************************ 00:06:27.680 END TEST dd_flags_misc_forced_aio 00:06:27.680 ************************************ 00:06:27.680 10:44:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:27.940 10:44:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:27.940 10:44:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:27.940 00:06:27.940 real 0m27.041s 00:06:27.940 user 0m14.822s 00:06:27.940 sys 0m8.677s 00:06:27.940 10:44:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.940 10:44:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:27.940 
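The dd_flags_misc_forced_aio runs above sweep spdk_dd's output flags against the same pair of dump files. A minimal sketch of that sweep, using only the binary path, dump files and options visible in the log (the loop itself is illustrative and is not the dd/posix.sh source):

# Sketch: repeat the forced-aio copy once per output flag exercised above.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
IF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
OF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
for oflag in nonblock sync dsync; do
    "$DD" --aio --if="$IF" --iflag=nonblock --of="$OF" --oflag="$oflag"
done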
************************************ 00:06:27.940 END TEST spdk_dd_posix 00:06:27.940 ************************************ 00:06:27.940 10:44:57 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:27.940 10:44:57 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.940 10:44:57 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.940 10:44:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:27.940 ************************************ 00:06:27.940 START TEST spdk_dd_malloc 00:06:27.940 ************************************ 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:27.940 * Looking for test storage... 00:06:27.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:27.940 ************************************ 00:06:27.940 START TEST dd_malloc_copy 00:06:27.940 ************************************ 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:27.940 10:44:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:27.940 [2024-07-25 10:44:57.632160] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:27.940 [2024-07-25 10:44:57.632264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62787 ] 00:06:27.940 { 00:06:27.940 "subsystems": [ 00:06:27.940 { 00:06:27.940 "subsystem": "bdev", 00:06:27.940 "config": [ 00:06:27.940 { 00:06:27.940 "params": { 00:06:27.940 "block_size": 512, 00:06:27.940 "num_blocks": 1048576, 00:06:27.940 "name": "malloc0" 00:06:27.940 }, 00:06:27.940 "method": "bdev_malloc_create" 00:06:27.940 }, 00:06:27.940 { 00:06:27.940 "params": { 00:06:27.940 "block_size": 512, 00:06:27.940 "num_blocks": 1048576, 00:06:27.940 "name": "malloc1" 00:06:27.940 }, 00:06:27.940 "method": "bdev_malloc_create" 00:06:27.940 }, 00:06:27.940 { 00:06:27.940 "method": "bdev_wait_for_examine" 00:06:27.940 } 00:06:27.940 ] 00:06:27.940 } 00:06:27.940 ] 00:06:27.940 } 00:06:28.199 [2024-07-25 10:44:57.769744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.200 [2024-07-25 10:44:57.885233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.459 [2024-07-25 10:44:57.963406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.321  Copying: 208/512 [MB] (208 MBps) Copying: 412/512 [MB] (203 MBps) Copying: 512/512 [MB] (average 206 MBps) 00:06:32.321 00:06:32.321 10:45:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:32.321 10:45:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:32.321 10:45:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:32.321 10:45:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:32.321 [2024-07-25 10:45:01.810509] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
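The dd_malloc_copy run above hands spdk_dd its whole bdev layout as JSON on fd 62 (two malloc bdevs of 1048576 blocks of 512 bytes) and copies one onto the other with --ib/--ob. A self-contained sketch of the same forward copy, assuming the repo path shown in the log; a heredoc on fd 62 stands in for the test's gen_conf helper as one way to supply the same config:

# Sketch: malloc0 -> malloc1 copy, config supplied inline on fd 62.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" }, "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" }, "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF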
00:06:32.321 [2024-07-25 10:45:01.810625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62840 ] 00:06:32.321 { 00:06:32.321 "subsystems": [ 00:06:32.321 { 00:06:32.321 "subsystem": "bdev", 00:06:32.321 "config": [ 00:06:32.321 { 00:06:32.321 "params": { 00:06:32.321 "block_size": 512, 00:06:32.321 "num_blocks": 1048576, 00:06:32.321 "name": "malloc0" 00:06:32.321 }, 00:06:32.321 "method": "bdev_malloc_create" 00:06:32.321 }, 00:06:32.321 { 00:06:32.321 "params": { 00:06:32.321 "block_size": 512, 00:06:32.321 "num_blocks": 1048576, 00:06:32.321 "name": "malloc1" 00:06:32.321 }, 00:06:32.321 "method": "bdev_malloc_create" 00:06:32.321 }, 00:06:32.321 { 00:06:32.321 "method": "bdev_wait_for_examine" 00:06:32.321 } 00:06:32.321 ] 00:06:32.321 } 00:06:32.321 ] 00:06:32.321 } 00:06:32.321 [2024-07-25 10:45:01.946690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.580 [2024-07-25 10:45:02.075409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.580 [2024-07-25 10:45:02.153706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.419  Copying: 207/512 [MB] (207 MBps) Copying: 414/512 [MB] (206 MBps) Copying: 512/512 [MB] (average 207 MBps) 00:06:36.419 00:06:36.419 00:06:36.419 real 0m8.368s 00:06:36.419 user 0m7.064s 00:06:36.419 sys 0m1.146s 00:06:36.419 10:45:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.419 10:45:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.419 ************************************ 00:06:36.419 END TEST dd_malloc_copy 00:06:36.419 ************************************ 00:06:36.419 00:06:36.419 real 0m8.513s 00:06:36.419 user 0m7.130s 00:06:36.419 sys 0m1.224s 00:06:36.419 10:45:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.419 10:45:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:36.419 ************************************ 00:06:36.419 END TEST spdk_dd_malloc 00:06:36.419 ************************************ 00:06:36.419 10:45:06 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:36.419 10:45:06 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:36.419 10:45:06 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.419 10:45:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:36.419 ************************************ 00:06:36.419 START TEST spdk_dd_bdev_to_bdev 00:06:36.419 ************************************ 00:06:36.419 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:36.419 * Looking for test storage... 
00:06:36.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:36.419 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:36.420 
10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:36.420 ************************************ 00:06:36.420 START TEST dd_inflate_file 00:06:36.420 ************************************ 00:06:36.420 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:36.679 [2024-07-25 10:45:06.187369] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:36.679 [2024-07-25 10:45:06.187489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62950 ] 00:06:36.679 [2024-07-25 10:45:06.326403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.938 [2024-07-25 10:45:06.456643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.938 [2024-07-25 10:45:06.531492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.197  Copying: 64/64 [MB] (average 1600 MBps) 00:06:37.197 00:06:37.197 00:06:37.197 real 0m0.763s 00:06:37.197 user 0m0.463s 00:06:37.197 sys 0m0.380s 00:06:37.197 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.197 ************************************ 00:06:37.197 END TEST dd_inflate_file 00:06:37.197 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:37.197 ************************************ 00:06:37.456 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:37.456 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:37.456 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:37.456 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:37.456 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:37.456 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:37.456 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:37.456 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.456 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:37.456 ************************************ 00:06:37.456 START TEST dd_copy_to_out_bdev 00:06:37.456 ************************************ 00:06:37.456 10:45:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:37.456 { 00:06:37.456 "subsystems": [ 00:06:37.456 { 00:06:37.456 "subsystem": "bdev", 00:06:37.456 "config": [ 00:06:37.456 { 00:06:37.456 "params": { 00:06:37.456 "trtype": "pcie", 00:06:37.456 "traddr": "0000:00:10.0", 00:06:37.456 "name": "Nvme0" 00:06:37.456 }, 00:06:37.456 "method": "bdev_nvme_attach_controller" 00:06:37.456 }, 00:06:37.456 { 00:06:37.456 "params": { 00:06:37.456 "trtype": "pcie", 00:06:37.456 "traddr": "0000:00:11.0", 00:06:37.456 "name": "Nvme1" 00:06:37.456 }, 00:06:37.456 "method": "bdev_nvme_attach_controller" 00:06:37.456 }, 00:06:37.456 { 00:06:37.456 "method": "bdev_wait_for_examine" 00:06:37.456 } 00:06:37.456 ] 00:06:37.456 } 00:06:37.456 ] 00:06:37.456 } 00:06:37.456 [2024-07-25 10:45:07.011359] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:37.456 [2024-07-25 10:45:07.011469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62989 ] 00:06:37.456 [2024-07-25 10:45:07.151676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.715 [2024-07-25 10:45:07.279771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.715 [2024-07-25 10:45:07.357017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.355  Copying: 54/64 [MB] (54 MBps) Copying: 64/64 [MB] (average 54 MBps) 00:06:39.355 00:06:39.355 00:06:39.355 real 0m2.121s 00:06:39.355 user 0m1.839s 00:06:39.355 sys 0m1.623s 00:06:39.355 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.355 ************************************ 00:06:39.355 END TEST dd_copy_to_out_bdev 00:06:39.355 ************************************ 00:06:39.355 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:39.614 ************************************ 00:06:39.614 START TEST dd_offset_magic 00:06:39.614 ************************************ 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:39.614 10:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:39.614 [2024-07-25 10:45:09.180494] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
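dd_offset_magic, starting above, copies 65 blocks of 1 MiB from Nvme0n1 into Nvme1n1 at a block offset and then reads a single block back to dd.dump1 for the magic comparison. A condensed sketch of the forward copy and read-back for the 16-block offset, with the bdev names, block size, counts and offsets taken from the log; the NVMe attach config (Nvme0 at 0000:00:10.0, Nvme1 at 0000:00:11.0) is assumed to live in a hypothetical nvme.json file instead of on fd 62:

# Sketch: 65 x 1 MiB from Nvme0n1 into Nvme1n1 at a 16-block offset, then read one block back.
# nvme.json is a hypothetical file holding the same Nvme0/Nvme1 pcie attach config shown above.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=nvme.json
"$DD" --ib=Nvme0n1 --ob=Nvme1n1 --bs=1048576 --count=65 --seek=16 --json "$CONF"
"$DD" --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=1048576 --count=1 --skip=16 --json "$CONF"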
00:06:39.614 [2024-07-25 10:45:09.180563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63034 ] 00:06:39.614 { 00:06:39.614 "subsystems": [ 00:06:39.614 { 00:06:39.614 "subsystem": "bdev", 00:06:39.614 "config": [ 00:06:39.614 { 00:06:39.614 "params": { 00:06:39.614 "trtype": "pcie", 00:06:39.614 "traddr": "0000:00:10.0", 00:06:39.614 "name": "Nvme0" 00:06:39.614 }, 00:06:39.614 "method": "bdev_nvme_attach_controller" 00:06:39.614 }, 00:06:39.614 { 00:06:39.614 "params": { 00:06:39.614 "trtype": "pcie", 00:06:39.614 "traddr": "0000:00:11.0", 00:06:39.614 "name": "Nvme1" 00:06:39.614 }, 00:06:39.614 "method": "bdev_nvme_attach_controller" 00:06:39.614 }, 00:06:39.614 { 00:06:39.614 "method": "bdev_wait_for_examine" 00:06:39.614 } 00:06:39.614 ] 00:06:39.614 } 00:06:39.614 ] 00:06:39.614 } 00:06:39.614 [2024-07-25 10:45:09.311931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.873 [2024-07-25 10:45:09.431898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.873 [2024-07-25 10:45:09.509480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.391  Copying: 65/65 [MB] (average 833 MBps) 00:06:40.391 00:06:40.391 10:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:40.391 10:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:40.391 10:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:40.391 10:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:40.650 [2024-07-25 10:45:10.173771] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:40.650 [2024-07-25 10:45:10.173894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63054 ] 00:06:40.650 { 00:06:40.650 "subsystems": [ 00:06:40.650 { 00:06:40.650 "subsystem": "bdev", 00:06:40.650 "config": [ 00:06:40.650 { 00:06:40.650 "params": { 00:06:40.650 "trtype": "pcie", 00:06:40.650 "traddr": "0000:00:10.0", 00:06:40.650 "name": "Nvme0" 00:06:40.650 }, 00:06:40.650 "method": "bdev_nvme_attach_controller" 00:06:40.650 }, 00:06:40.650 { 00:06:40.650 "params": { 00:06:40.650 "trtype": "pcie", 00:06:40.650 "traddr": "0000:00:11.0", 00:06:40.650 "name": "Nvme1" 00:06:40.650 }, 00:06:40.650 "method": "bdev_nvme_attach_controller" 00:06:40.650 }, 00:06:40.650 { 00:06:40.650 "method": "bdev_wait_for_examine" 00:06:40.650 } 00:06:40.650 ] 00:06:40.650 } 00:06:40.650 ] 00:06:40.650 } 00:06:40.650 [2024-07-25 10:45:10.311920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.909 [2024-07-25 10:45:10.435594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.909 [2024-07-25 10:45:10.511558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.427  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:41.427 00:06:41.427 10:45:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:41.427 10:45:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:41.427 10:45:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:41.427 10:45:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:41.427 10:45:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:41.427 10:45:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:41.427 10:45:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:41.427 [2024-07-25 10:45:11.090790] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:41.427 [2024-07-25 10:45:11.091271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63076 ] 00:06:41.427 { 00:06:41.427 "subsystems": [ 00:06:41.427 { 00:06:41.427 "subsystem": "bdev", 00:06:41.427 "config": [ 00:06:41.427 { 00:06:41.427 "params": { 00:06:41.427 "trtype": "pcie", 00:06:41.427 "traddr": "0000:00:10.0", 00:06:41.427 "name": "Nvme0" 00:06:41.427 }, 00:06:41.427 "method": "bdev_nvme_attach_controller" 00:06:41.427 }, 00:06:41.427 { 00:06:41.427 "params": { 00:06:41.427 "trtype": "pcie", 00:06:41.427 "traddr": "0000:00:11.0", 00:06:41.427 "name": "Nvme1" 00:06:41.427 }, 00:06:41.427 "method": "bdev_nvme_attach_controller" 00:06:41.427 }, 00:06:41.427 { 00:06:41.427 "method": "bdev_wait_for_examine" 00:06:41.427 } 00:06:41.427 ] 00:06:41.427 } 00:06:41.427 ] 00:06:41.427 } 00:06:41.686 [2024-07-25 10:45:11.224469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.686 [2024-07-25 10:45:11.358145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.944 [2024-07-25 10:45:11.432127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.512  Copying: 65/65 [MB] (average 902 MBps) 00:06:42.512 00:06:42.512 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:42.512 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:42.512 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:42.512 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:42.512 { 00:06:42.512 "subsystems": [ 00:06:42.512 { 00:06:42.512 "subsystem": "bdev", 00:06:42.512 "config": [ 00:06:42.512 { 00:06:42.512 "params": { 00:06:42.512 "trtype": "pcie", 00:06:42.512 "traddr": "0000:00:10.0", 00:06:42.512 "name": "Nvme0" 00:06:42.512 }, 00:06:42.512 "method": "bdev_nvme_attach_controller" 00:06:42.512 }, 00:06:42.512 { 00:06:42.512 "params": { 00:06:42.512 "trtype": "pcie", 00:06:42.512 "traddr": "0000:00:11.0", 00:06:42.512 "name": "Nvme1" 00:06:42.512 }, 00:06:42.512 "method": "bdev_nvme_attach_controller" 00:06:42.512 }, 00:06:42.512 { 00:06:42.512 "method": "bdev_wait_for_examine" 00:06:42.512 } 00:06:42.512 ] 00:06:42.512 } 00:06:42.512 ] 00:06:42.512 } 00:06:42.512 [2024-07-25 10:45:12.092138] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:42.512 [2024-07-25 10:45:12.092275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63095 ] 00:06:42.512 [2024-07-25 10:45:12.235247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.771 [2024-07-25 10:45:12.376689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.771 [2024-07-25 10:45:12.448028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.289  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:43.289 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:43.289 00:06:43.289 real 0m3.793s 00:06:43.289 user 0m2.748s 00:06:43.289 sys 0m1.190s 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:43.289 ************************************ 00:06:43.289 END TEST dd_offset_magic 00:06:43.289 ************************************ 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:43.289 10:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:43.289 [2024-07-25 10:45:13.018756] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:43.289 [2024-07-25 10:45:13.018857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63128 ] 00:06:43.549 { 00:06:43.549 "subsystems": [ 00:06:43.549 { 00:06:43.549 "subsystem": "bdev", 00:06:43.549 "config": [ 00:06:43.549 { 00:06:43.549 "params": { 00:06:43.549 "trtype": "pcie", 00:06:43.549 "traddr": "0000:00:10.0", 00:06:43.549 "name": "Nvme0" 00:06:43.549 }, 00:06:43.549 "method": "bdev_nvme_attach_controller" 00:06:43.549 }, 00:06:43.549 { 00:06:43.549 "params": { 00:06:43.549 "trtype": "pcie", 00:06:43.549 "traddr": "0000:00:11.0", 00:06:43.549 "name": "Nvme1" 00:06:43.549 }, 00:06:43.549 "method": "bdev_nvme_attach_controller" 00:06:43.549 }, 00:06:43.549 { 00:06:43.549 "method": "bdev_wait_for_examine" 00:06:43.549 } 00:06:43.549 ] 00:06:43.549 } 00:06:43.549 ] 00:06:43.549 } 00:06:43.549 [2024-07-25 10:45:13.154499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.549 [2024-07-25 10:45:13.276291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.810 [2024-07-25 10:45:13.352327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.378  Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:44.378 00:06:44.378 10:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:44.378 10:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:44.378 10:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.378 10:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:44.378 10:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:44.378 10:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:44.378 10:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:44.378 10:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:44.379 10:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:44.379 10:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:44.379 [2024-07-25 10:45:13.876299] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:44.379 [2024-07-25 10:45:13.876385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63149 ] 00:06:44.379 { 00:06:44.379 "subsystems": [ 00:06:44.379 { 00:06:44.379 "subsystem": "bdev", 00:06:44.379 "config": [ 00:06:44.379 { 00:06:44.379 "params": { 00:06:44.379 "trtype": "pcie", 00:06:44.379 "traddr": "0000:00:10.0", 00:06:44.379 "name": "Nvme0" 00:06:44.379 }, 00:06:44.379 "method": "bdev_nvme_attach_controller" 00:06:44.379 }, 00:06:44.379 { 00:06:44.379 "params": { 00:06:44.379 "trtype": "pcie", 00:06:44.379 "traddr": "0000:00:11.0", 00:06:44.379 "name": "Nvme1" 00:06:44.379 }, 00:06:44.379 "method": "bdev_nvme_attach_controller" 00:06:44.379 }, 00:06:44.379 { 00:06:44.379 "method": "bdev_wait_for_examine" 00:06:44.379 } 00:06:44.379 ] 00:06:44.379 } 00:06:44.379 ] 00:06:44.379 } 00:06:44.379 [2024-07-25 10:45:14.007915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.637 [2024-07-25 10:45:14.120004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.637 [2024-07-25 10:45:14.195530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.156  Copying: 5120/5120 [kB] (average 833 MBps) 00:06:45.156 00:06:45.156 10:45:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:45.156 ************************************ 00:06:45.156 END TEST spdk_dd_bdev_to_bdev 00:06:45.156 ************************************ 00:06:45.156 00:06:45.156 real 0m8.668s 00:06:45.156 user 0m6.379s 00:06:45.156 sys 0m4.018s 00:06:45.156 10:45:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.156 10:45:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:45.156 10:45:14 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:45.156 10:45:14 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:45.156 10:45:14 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.156 10:45:14 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.156 10:45:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:45.156 ************************************ 00:06:45.156 START TEST spdk_dd_uring 00:06:45.156 ************************************ 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:45.156 * Looking for test storage... 
00:06:45.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:45.156 ************************************ 00:06:45.156 START TEST dd_uring_copy 00:06:45.156 ************************************ 00:06:45.156 
10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:45.156 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=jf9gxg74a7xpuzbio6pqop5mngzse4naufk86zlmq7wnt28rlxiyh4e8evwtsrsjcze6qh2izu40x75tkuxmctuyyxnxgwgj94ertx0tu1sp8r54zdttnls5o2r3za3p8s30rfmk4pi3wufe2ywp5r050hck1uem3nxbcg6hoqiis1qku37wgu17r3hmssnrs1o0x09gjihofvtr87gryyplfx87fys2u81tq3p64huk8vsxueofhckhjdcntrlwmde6xjumujojg3h7bwbe797erl6pwejpyv26fy5d5dzh7xreanezf4djwytwz0ysjr5b06kc2l3aqh5rvtrzwfvj55uqpf7400p8a75rrdffm6s9ahtjspn2dbj8d67d53hernn0rgqxs7qoz1qcztbfbnlcg1o2r6yichiudz0m9avm0kt89t4volg5pzschnobza5gx5xkfyr7331uv32i28tbsomln3m0ua0j1bzu6t2ahetg9iqa2vtepy3txqq0i561tp8yivd3413khml0h45y7rdv3ui2bx8ma8b6eeg9dkxfxkplornwc6kd4xz9fu3k6bk8j32svxw6bwx1lh2na3e15rflrs9ffb1idtue396xh45dbkffi8q9yo2mxro76bcx4m7gcr0zryxkg4f0zfjh4uhpuomvnd7ew718x3p9euq4j95ceqhu0t1xbd8h7layag9sz52em65jcvwo2yn8fd9hagkw68gi56ugmi7c4o88v2y31rpt93dh22kybvj9lk05f6dfwgf6bmi1w1zjqrgdy1r1ozhhkzcr5lqjn3vult7oynj0f9swocbglbcjp09lth5flhh3bjg7zi4damr31ic5vjtmm5y8lfm4ca9inth5hhvtjtzql2zh9gkicw0nk0lcddip1x9sepqzwrj0eesa8zj8dutw0rawy9g9vj326vdniy2nceiylhwhn2vetizfcvo2ultln9kuxq5iaqdol2e0xa6kq0ro6grzl97beym1 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo jf9gxg74a7xpuzbio6pqop5mngzse4naufk86zlmq7wnt28rlxiyh4e8evwtsrsjcze6qh2izu40x75tkuxmctuyyxnxgwgj94ertx0tu1sp8r54zdttnls5o2r3za3p8s30rfmk4pi3wufe2ywp5r050hck1uem3nxbcg6hoqiis1qku37wgu17r3hmssnrs1o0x09gjihofvtr87gryyplfx87fys2u81tq3p64huk8vsxueofhckhjdcntrlwmde6xjumujojg3h7bwbe797erl6pwejpyv26fy5d5dzh7xreanezf4djwytwz0ysjr5b06kc2l3aqh5rvtrzwfvj55uqpf7400p8a75rrdffm6s9ahtjspn2dbj8d67d53hernn0rgqxs7qoz1qcztbfbnlcg1o2r6yichiudz0m9avm0kt89t4volg5pzschnobza5gx5xkfyr7331uv32i28tbsomln3m0ua0j1bzu6t2ahetg9iqa2vtepy3txqq0i561tp8yivd3413khml0h45y7rdv3ui2bx8ma8b6eeg9dkxfxkplornwc6kd4xz9fu3k6bk8j32svxw6bwx1lh2na3e15rflrs9ffb1idtue396xh45dbkffi8q9yo2mxro76bcx4m7gcr0zryxkg4f0zfjh4uhpuomvnd7ew718x3p9euq4j95ceqhu0t1xbd8h7layag9sz52em65jcvwo2yn8fd9hagkw68gi56ugmi7c4o88v2y31rpt93dh22kybvj9lk05f6dfwgf6bmi1w1zjqrgdy1r1ozhhkzcr5lqjn3vult7oynj0f9swocbglbcjp09lth5flhh3bjg7zi4damr31ic5vjtmm5y8lfm4ca9inth5hhvtjtzql2zh9gkicw0nk0lcddip1x9sepqzwrj0eesa8zj8dutw0rawy9g9vj326vdniy2nceiylhwhn2vetizfcvo2ultln9kuxq5iaqdol2e0xa6kq0ro6grzl97beym1 00:06:45.157 10:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:45.416 [2024-07-25 10:45:14.943350] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:45.416 [2024-07-25 10:45:14.943466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63220 ] 00:06:45.416 [2024-07-25 10:45:15.079926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.674 [2024-07-25 10:45:15.205271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.674 [2024-07-25 10:45:15.278947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.809  Copying: 511/511 [MB] (average 1387 MBps) 00:06:46.809 00:06:46.809 10:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:46.809 10:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:46.809 10:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:46.809 10:45:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.068 [2024-07-25 10:45:16.563383] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:47.068 [2024-07-25 10:45:16.563481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63242 ] 00:06:47.068 { 00:06:47.068 "subsystems": [ 00:06:47.068 { 00:06:47.068 "subsystem": "bdev", 00:06:47.068 "config": [ 00:06:47.068 { 00:06:47.068 "params": { 00:06:47.068 "block_size": 512, 00:06:47.068 "num_blocks": 1048576, 00:06:47.068 "name": "malloc0" 00:06:47.068 }, 00:06:47.068 "method": "bdev_malloc_create" 00:06:47.068 }, 00:06:47.068 { 00:06:47.068 "params": { 00:06:47.068 "filename": "/dev/zram1", 00:06:47.068 "name": "uring0" 00:06:47.068 }, 00:06:47.068 "method": "bdev_uring_create" 00:06:47.068 }, 00:06:47.068 { 00:06:47.068 "method": "bdev_wait_for_examine" 00:06:47.068 } 00:06:47.068 ] 00:06:47.068 } 00:06:47.068 ] 00:06:47.068 } 00:06:47.068 [2024-07-25 10:45:16.695487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.328 [2024-07-25 10:45:16.825633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.328 [2024-07-25 10:45:16.898884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.479  Copying: 220/512 [MB] (220 MBps) Copying: 444/512 [MB] (224 MBps) Copying: 512/512 [MB] (average 222 MBps) 00:06:50.479 00:06:50.480 10:45:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:50.480 10:45:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:50.480 10:45:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:50.480 10:45:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:50.480 [2024-07-25 10:45:20.088358] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
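The dd_uring_copy run above stages a 512M zram disk, wraps it in a uring bdev, and round-trips the magic-prefixed dump file through it. A minimal sketch of that round trip, reusing the device and file names from the log; the disksize sysfs write is inferred from what set_zram_dev does (it is not shown verbatim in the log), and the whole thing needs zram support and root:

# Sketch: create the zram disk, then copy the magic dump into the uring bdev and back out.
id=$(cat /sys/class/zram-control/hot_add)      # the run above got id 1
echo 512M > "/sys/block/zram${id}/disksize"
conf=$(mktemp)
cat > "$conf" <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" }, "method": "bdev_malloc_create" },
  { "params": { "filename": "/dev/zram${id}", "name": "uring0" }, "method": "bdev_uring_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json "$conf"
"$DD" --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json "$conf"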
00:06:50.480 [2024-07-25 10:45:20.088434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63293 ] 00:06:50.480 { 00:06:50.480 "subsystems": [ 00:06:50.480 { 00:06:50.480 "subsystem": "bdev", 00:06:50.480 "config": [ 00:06:50.480 { 00:06:50.480 "params": { 00:06:50.480 "block_size": 512, 00:06:50.480 "num_blocks": 1048576, 00:06:50.480 "name": "malloc0" 00:06:50.480 }, 00:06:50.480 "method": "bdev_malloc_create" 00:06:50.480 }, 00:06:50.480 { 00:06:50.480 "params": { 00:06:50.480 "filename": "/dev/zram1", 00:06:50.480 "name": "uring0" 00:06:50.480 }, 00:06:50.480 "method": "bdev_uring_create" 00:06:50.480 }, 00:06:50.480 { 00:06:50.480 "method": "bdev_wait_for_examine" 00:06:50.480 } 00:06:50.480 ] 00:06:50.480 } 00:06:50.480 ] 00:06:50.480 } 00:06:50.738 [2024-07-25 10:45:20.226020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.738 [2024-07-25 10:45:20.360057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.738 [2024-07-25 10:45:20.433282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.592  Copying: 189/512 [MB] (189 MBps) Copying: 355/512 [MB] (166 MBps) Copying: 512/512 [MB] (average 175 MBps) 00:06:54.592 00:06:54.592 10:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:54.592 10:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ jf9gxg74a7xpuzbio6pqop5mngzse4naufk86zlmq7wnt28rlxiyh4e8evwtsrsjcze6qh2izu40x75tkuxmctuyyxnxgwgj94ertx0tu1sp8r54zdttnls5o2r3za3p8s30rfmk4pi3wufe2ywp5r050hck1uem3nxbcg6hoqiis1qku37wgu17r3hmssnrs1o0x09gjihofvtr87gryyplfx87fys2u81tq3p64huk8vsxueofhckhjdcntrlwmde6xjumujojg3h7bwbe797erl6pwejpyv26fy5d5dzh7xreanezf4djwytwz0ysjr5b06kc2l3aqh5rvtrzwfvj55uqpf7400p8a75rrdffm6s9ahtjspn2dbj8d67d53hernn0rgqxs7qoz1qcztbfbnlcg1o2r6yichiudz0m9avm0kt89t4volg5pzschnobza5gx5xkfyr7331uv32i28tbsomln3m0ua0j1bzu6t2ahetg9iqa2vtepy3txqq0i561tp8yivd3413khml0h45y7rdv3ui2bx8ma8b6eeg9dkxfxkplornwc6kd4xz9fu3k6bk8j32svxw6bwx1lh2na3e15rflrs9ffb1idtue396xh45dbkffi8q9yo2mxro76bcx4m7gcr0zryxkg4f0zfjh4uhpuomvnd7ew718x3p9euq4j95ceqhu0t1xbd8h7layag9sz52em65jcvwo2yn8fd9hagkw68gi56ugmi7c4o88v2y31rpt93dh22kybvj9lk05f6dfwgf6bmi1w1zjqrgdy1r1ozhhkzcr5lqjn3vult7oynj0f9swocbglbcjp09lth5flhh3bjg7zi4damr31ic5vjtmm5y8lfm4ca9inth5hhvtjtzql2zh9gkicw0nk0lcddip1x9sepqzwrj0eesa8zj8dutw0rawy9g9vj326vdniy2nceiylhwhn2vetizfcvo2ultln9kuxq5iaqdol2e0xa6kq0ro6grzl97beym1 == 
\j\f\9\g\x\g\7\4\a\7\x\p\u\z\b\i\o\6\p\q\o\p\5\m\n\g\z\s\e\4\n\a\u\f\k\8\6\z\l\m\q\7\w\n\t\2\8\r\l\x\i\y\h\4\e\8\e\v\w\t\s\r\s\j\c\z\e\6\q\h\2\i\z\u\4\0\x\7\5\t\k\u\x\m\c\t\u\y\y\x\n\x\g\w\g\j\9\4\e\r\t\x\0\t\u\1\s\p\8\r\5\4\z\d\t\t\n\l\s\5\o\2\r\3\z\a\3\p\8\s\3\0\r\f\m\k\4\p\i\3\w\u\f\e\2\y\w\p\5\r\0\5\0\h\c\k\1\u\e\m\3\n\x\b\c\g\6\h\o\q\i\i\s\1\q\k\u\3\7\w\g\u\1\7\r\3\h\m\s\s\n\r\s\1\o\0\x\0\9\g\j\i\h\o\f\v\t\r\8\7\g\r\y\y\p\l\f\x\8\7\f\y\s\2\u\8\1\t\q\3\p\6\4\h\u\k\8\v\s\x\u\e\o\f\h\c\k\h\j\d\c\n\t\r\l\w\m\d\e\6\x\j\u\m\u\j\o\j\g\3\h\7\b\w\b\e\7\9\7\e\r\l\6\p\w\e\j\p\y\v\2\6\f\y\5\d\5\d\z\h\7\x\r\e\a\n\e\z\f\4\d\j\w\y\t\w\z\0\y\s\j\r\5\b\0\6\k\c\2\l\3\a\q\h\5\r\v\t\r\z\w\f\v\j\5\5\u\q\p\f\7\4\0\0\p\8\a\7\5\r\r\d\f\f\m\6\s\9\a\h\t\j\s\p\n\2\d\b\j\8\d\6\7\d\5\3\h\e\r\n\n\0\r\g\q\x\s\7\q\o\z\1\q\c\z\t\b\f\b\n\l\c\g\1\o\2\r\6\y\i\c\h\i\u\d\z\0\m\9\a\v\m\0\k\t\8\9\t\4\v\o\l\g\5\p\z\s\c\h\n\o\b\z\a\5\g\x\5\x\k\f\y\r\7\3\3\1\u\v\3\2\i\2\8\t\b\s\o\m\l\n\3\m\0\u\a\0\j\1\b\z\u\6\t\2\a\h\e\t\g\9\i\q\a\2\v\t\e\p\y\3\t\x\q\q\0\i\5\6\1\t\p\8\y\i\v\d\3\4\1\3\k\h\m\l\0\h\4\5\y\7\r\d\v\3\u\i\2\b\x\8\m\a\8\b\6\e\e\g\9\d\k\x\f\x\k\p\l\o\r\n\w\c\6\k\d\4\x\z\9\f\u\3\k\6\b\k\8\j\3\2\s\v\x\w\6\b\w\x\1\l\h\2\n\a\3\e\1\5\r\f\l\r\s\9\f\f\b\1\i\d\t\u\e\3\9\6\x\h\4\5\d\b\k\f\f\i\8\q\9\y\o\2\m\x\r\o\7\6\b\c\x\4\m\7\g\c\r\0\z\r\y\x\k\g\4\f\0\z\f\j\h\4\u\h\p\u\o\m\v\n\d\7\e\w\7\1\8\x\3\p\9\e\u\q\4\j\9\5\c\e\q\h\u\0\t\1\x\b\d\8\h\7\l\a\y\a\g\9\s\z\5\2\e\m\6\5\j\c\v\w\o\2\y\n\8\f\d\9\h\a\g\k\w\6\8\g\i\5\6\u\g\m\i\7\c\4\o\8\8\v\2\y\3\1\r\p\t\9\3\d\h\2\2\k\y\b\v\j\9\l\k\0\5\f\6\d\f\w\g\f\6\b\m\i\1\w\1\z\j\q\r\g\d\y\1\r\1\o\z\h\h\k\z\c\r\5\l\q\j\n\3\v\u\l\t\7\o\y\n\j\0\f\9\s\w\o\c\b\g\l\b\c\j\p\0\9\l\t\h\5\f\l\h\h\3\b\j\g\7\z\i\4\d\a\m\r\3\1\i\c\5\v\j\t\m\m\5\y\8\l\f\m\4\c\a\9\i\n\t\h\5\h\h\v\t\j\t\z\q\l\2\z\h\9\g\k\i\c\w\0\n\k\0\l\c\d\d\i\p\1\x\9\s\e\p\q\z\w\r\j\0\e\e\s\a\8\z\j\8\d\u\t\w\0\r\a\w\y\9\g\9\v\j\3\2\6\v\d\n\i\y\2\n\c\e\i\y\l\h\w\h\n\2\v\e\t\i\z\f\c\v\o\2\u\l\t\l\n\9\k\u\x\q\5\i\a\q\d\o\l\2\e\0\x\a\6\k\q\0\r\o\6\g\r\z\l\9\7\b\e\y\m\1 ]] 00:06:54.592 10:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:54.592 10:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ jf9gxg74a7xpuzbio6pqop5mngzse4naufk86zlmq7wnt28rlxiyh4e8evwtsrsjcze6qh2izu40x75tkuxmctuyyxnxgwgj94ertx0tu1sp8r54zdttnls5o2r3za3p8s30rfmk4pi3wufe2ywp5r050hck1uem3nxbcg6hoqiis1qku37wgu17r3hmssnrs1o0x09gjihofvtr87gryyplfx87fys2u81tq3p64huk8vsxueofhckhjdcntrlwmde6xjumujojg3h7bwbe797erl6pwejpyv26fy5d5dzh7xreanezf4djwytwz0ysjr5b06kc2l3aqh5rvtrzwfvj55uqpf7400p8a75rrdffm6s9ahtjspn2dbj8d67d53hernn0rgqxs7qoz1qcztbfbnlcg1o2r6yichiudz0m9avm0kt89t4volg5pzschnobza5gx5xkfyr7331uv32i28tbsomln3m0ua0j1bzu6t2ahetg9iqa2vtepy3txqq0i561tp8yivd3413khml0h45y7rdv3ui2bx8ma8b6eeg9dkxfxkplornwc6kd4xz9fu3k6bk8j32svxw6bwx1lh2na3e15rflrs9ffb1idtue396xh45dbkffi8q9yo2mxro76bcx4m7gcr0zryxkg4f0zfjh4uhpuomvnd7ew718x3p9euq4j95ceqhu0t1xbd8h7layag9sz52em65jcvwo2yn8fd9hagkw68gi56ugmi7c4o88v2y31rpt93dh22kybvj9lk05f6dfwgf6bmi1w1zjqrgdy1r1ozhhkzcr5lqjn3vult7oynj0f9swocbglbcjp09lth5flhh3bjg7zi4damr31ic5vjtmm5y8lfm4ca9inth5hhvtjtzql2zh9gkicw0nk0lcddip1x9sepqzwrj0eesa8zj8dutw0rawy9g9vj326vdniy2nceiylhwhn2vetizfcvo2ultln9kuxq5iaqdol2e0xa6kq0ro6grzl97beym1 == 
\j\f\9\g\x\g\7\4\a\7\x\p\u\z\b\i\o\6\p\q\o\p\5\m\n\g\z\s\e\4\n\a\u\f\k\8\6\z\l\m\q\7\w\n\t\2\8\r\l\x\i\y\h\4\e\8\e\v\w\t\s\r\s\j\c\z\e\6\q\h\2\i\z\u\4\0\x\7\5\t\k\u\x\m\c\t\u\y\y\x\n\x\g\w\g\j\9\4\e\r\t\x\0\t\u\1\s\p\8\r\5\4\z\d\t\t\n\l\s\5\o\2\r\3\z\a\3\p\8\s\3\0\r\f\m\k\4\p\i\3\w\u\f\e\2\y\w\p\5\r\0\5\0\h\c\k\1\u\e\m\3\n\x\b\c\g\6\h\o\q\i\i\s\1\q\k\u\3\7\w\g\u\1\7\r\3\h\m\s\s\n\r\s\1\o\0\x\0\9\g\j\i\h\o\f\v\t\r\8\7\g\r\y\y\p\l\f\x\8\7\f\y\s\2\u\8\1\t\q\3\p\6\4\h\u\k\8\v\s\x\u\e\o\f\h\c\k\h\j\d\c\n\t\r\l\w\m\d\e\6\x\j\u\m\u\j\o\j\g\3\h\7\b\w\b\e\7\9\7\e\r\l\6\p\w\e\j\p\y\v\2\6\f\y\5\d\5\d\z\h\7\x\r\e\a\n\e\z\f\4\d\j\w\y\t\w\z\0\y\s\j\r\5\b\0\6\k\c\2\l\3\a\q\h\5\r\v\t\r\z\w\f\v\j\5\5\u\q\p\f\7\4\0\0\p\8\a\7\5\r\r\d\f\f\m\6\s\9\a\h\t\j\s\p\n\2\d\b\j\8\d\6\7\d\5\3\h\e\r\n\n\0\r\g\q\x\s\7\q\o\z\1\q\c\z\t\b\f\b\n\l\c\g\1\o\2\r\6\y\i\c\h\i\u\d\z\0\m\9\a\v\m\0\k\t\8\9\t\4\v\o\l\g\5\p\z\s\c\h\n\o\b\z\a\5\g\x\5\x\k\f\y\r\7\3\3\1\u\v\3\2\i\2\8\t\b\s\o\m\l\n\3\m\0\u\a\0\j\1\b\z\u\6\t\2\a\h\e\t\g\9\i\q\a\2\v\t\e\p\y\3\t\x\q\q\0\i\5\6\1\t\p\8\y\i\v\d\3\4\1\3\k\h\m\l\0\h\4\5\y\7\r\d\v\3\u\i\2\b\x\8\m\a\8\b\6\e\e\g\9\d\k\x\f\x\k\p\l\o\r\n\w\c\6\k\d\4\x\z\9\f\u\3\k\6\b\k\8\j\3\2\s\v\x\w\6\b\w\x\1\l\h\2\n\a\3\e\1\5\r\f\l\r\s\9\f\f\b\1\i\d\t\u\e\3\9\6\x\h\4\5\d\b\k\f\f\i\8\q\9\y\o\2\m\x\r\o\7\6\b\c\x\4\m\7\g\c\r\0\z\r\y\x\k\g\4\f\0\z\f\j\h\4\u\h\p\u\o\m\v\n\d\7\e\w\7\1\8\x\3\p\9\e\u\q\4\j\9\5\c\e\q\h\u\0\t\1\x\b\d\8\h\7\l\a\y\a\g\9\s\z\5\2\e\m\6\5\j\c\v\w\o\2\y\n\8\f\d\9\h\a\g\k\w\6\8\g\i\5\6\u\g\m\i\7\c\4\o\8\8\v\2\y\3\1\r\p\t\9\3\d\h\2\2\k\y\b\v\j\9\l\k\0\5\f\6\d\f\w\g\f\6\b\m\i\1\w\1\z\j\q\r\g\d\y\1\r\1\o\z\h\h\k\z\c\r\5\l\q\j\n\3\v\u\l\t\7\o\y\n\j\0\f\9\s\w\o\c\b\g\l\b\c\j\p\0\9\l\t\h\5\f\l\h\h\3\b\j\g\7\z\i\4\d\a\m\r\3\1\i\c\5\v\j\t\m\m\5\y\8\l\f\m\4\c\a\9\i\n\t\h\5\h\h\v\t\j\t\z\q\l\2\z\h\9\g\k\i\c\w\0\n\k\0\l\c\d\d\i\p\1\x\9\s\e\p\q\z\w\r\j\0\e\e\s\a\8\z\j\8\d\u\t\w\0\r\a\w\y\9\g\9\v\j\3\2\6\v\d\n\i\y\2\n\c\e\i\y\l\h\w\h\n\2\v\e\t\i\z\f\c\v\o\2\u\l\t\l\n\9\k\u\x\q\5\i\a\q\d\o\l\2\e\0\x\a\6\k\q\0\r\o\6\g\r\z\l\9\7\b\e\y\m\1 ]] 00:06:54.592 10:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:55.159 10:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:55.159 10:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:55.159 10:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:55.159 10:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:55.159 [2024-07-25 10:45:24.673594] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
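The heavily escaped [[ ... ]] lines above are just bash xtrace rendering of a plain string comparison: the first kilobyte of each dump is read back and checked against the original magic, then the two 512 MiB files are diffed byte-for-byte. A condensed sketch (the xtrace does not show the redirections, so the input files here are assumptions):

    read -rn1024 verify_magic < magic.dump0    # assumed source; redirection hidden by xtrace
    [[ "$magic" == "$verify_magic" ]]
    read -rn1024 verify_magic < magic.dump1
    [[ "$magic" == "$verify_magic" ]]
    diff -q magic.dump0 magic.dump1            # original vs. round-tripped file (dd/uring.sh@71)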
00:06:55.159 [2024-07-25 10:45:24.673691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63362 ] 00:06:55.159 { 00:06:55.159 "subsystems": [ 00:06:55.159 { 00:06:55.159 "subsystem": "bdev", 00:06:55.159 "config": [ 00:06:55.159 { 00:06:55.159 "params": { 00:06:55.159 "block_size": 512, 00:06:55.159 "num_blocks": 1048576, 00:06:55.159 "name": "malloc0" 00:06:55.159 }, 00:06:55.159 "method": "bdev_malloc_create" 00:06:55.159 }, 00:06:55.159 { 00:06:55.159 "params": { 00:06:55.159 "filename": "/dev/zram1", 00:06:55.159 "name": "uring0" 00:06:55.159 }, 00:06:55.159 "method": "bdev_uring_create" 00:06:55.159 }, 00:06:55.159 { 00:06:55.159 "method": "bdev_wait_for_examine" 00:06:55.159 } 00:06:55.159 ] 00:06:55.159 } 00:06:55.159 ] 00:06:55.159 } 00:06:55.159 [2024-07-25 10:45:24.807662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.418 [2024-07-25 10:45:24.939054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.418 [2024-07-25 10:45:25.012294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.488  Copying: 156/512 [MB] (156 MBps) Copying: 317/512 [MB] (161 MBps) Copying: 467/512 [MB] (150 MBps) Copying: 512/512 [MB] (average 155 MBps) 00:06:59.488 00:06:59.488 10:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:59.489 10:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:59.489 10:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:59.489 10:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:59.489 10:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:59.489 10:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:59.489 10:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:59.489 10:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.747 [2024-07-25 10:45:29.258642] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
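The dd/uring.sh@87 run copies no data ("Copying: 0/0" below); the interesting part is the extra bdev_uring_delete entry in the configuration printed below, which tears down uring0 after the bdevs are created. The follow-up NOT-wrapped read from uring0 is therefore expected to fail with "No such device". The delete entry has the same shape as the create entries:

    { "method": "bdev_uring_delete", "params": { "name": "uring0" } }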
00:06:59.747 [2024-07-25 10:45:29.258753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63432 ] 00:06:59.747 { 00:06:59.747 "subsystems": [ 00:06:59.747 { 00:06:59.747 "subsystem": "bdev", 00:06:59.747 "config": [ 00:06:59.747 { 00:06:59.747 "params": { 00:06:59.747 "block_size": 512, 00:06:59.747 "num_blocks": 1048576, 00:06:59.747 "name": "malloc0" 00:06:59.747 }, 00:06:59.747 "method": "bdev_malloc_create" 00:06:59.747 }, 00:06:59.747 { 00:06:59.747 "params": { 00:06:59.747 "filename": "/dev/zram1", 00:06:59.747 "name": "uring0" 00:06:59.747 }, 00:06:59.747 "method": "bdev_uring_create" 00:06:59.747 }, 00:06:59.747 { 00:06:59.747 "params": { 00:06:59.747 "name": "uring0" 00:06:59.748 }, 00:06:59.748 "method": "bdev_uring_delete" 00:06:59.748 }, 00:06:59.748 { 00:06:59.748 "method": "bdev_wait_for_examine" 00:06:59.748 } 00:06:59.748 ] 00:06:59.748 } 00:06:59.748 ] 00:06:59.748 } 00:06:59.748 [2024-07-25 10:45:29.463099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.006 [2024-07-25 10:45:29.613617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.006 [2024-07-25 10:45:29.690939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.203  Copying: 0/0 [B] (average 0 Bps) 00:07:01.203 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.203 10:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:01.203 { 00:07:01.203 "subsystems": [ 00:07:01.203 { 00:07:01.203 "subsystem": "bdev", 00:07:01.203 "config": [ 00:07:01.203 { 00:07:01.203 "params": { 00:07:01.203 "block_size": 512, 00:07:01.203 "num_blocks": 1048576, 00:07:01.203 "name": "malloc0" 00:07:01.203 }, 00:07:01.203 "method": "bdev_malloc_create" 00:07:01.203 }, 00:07:01.203 { 00:07:01.203 "params": { 00:07:01.203 "filename": "/dev/zram1", 00:07:01.203 "name": "uring0" 00:07:01.203 }, 00:07:01.203 "method": "bdev_uring_create" 00:07:01.203 }, 00:07:01.203 { 00:07:01.203 "params": { 00:07:01.204 "name": "uring0" 00:07:01.204 }, 00:07:01.204 "method": "bdev_uring_delete" 00:07:01.204 }, 00:07:01.204 { 00:07:01.204 "method": "bdev_wait_for_examine" 00:07:01.204 } 00:07:01.204 ] 00:07:01.204 } 00:07:01.204 ] 00:07:01.204 } 00:07:01.204 [2024-07-25 10:45:30.696272] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:01.204 [2024-07-25 10:45:30.696372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63463 ] 00:07:01.204 [2024-07-25 10:45:30.834988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.461 [2024-07-25 10:45:30.994628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.461 [2024-07-25 10:45:31.076465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.719 [2024-07-25 10:45:31.369081] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:01.719 [2024-07-25 10:45:31.369137] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:01.719 [2024-07-25 10:45:31.369154] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:01.719 [2024-07-25 10:45:31.369165] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.285 [2024-07-25 10:45:31.868049] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:02.285 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:02.544 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:02.544 00:07:02.544 real 0m17.361s 00:07:02.544 user 0m11.761s 00:07:02.544 sys 0m13.507s 00:07:02.544 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.544 ************************************ 00:07:02.544 END TEST dd_uring_copy 00:07:02.544 ************************************ 00:07:02.544 10:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:02.544 ************************************ 00:07:02.544 END TEST spdk_dd_uring 00:07:02.544 ************************************ 00:07:02.544 00:07:02.544 real 0m17.503s 00:07:02.544 user 0m11.820s 00:07:02.544 sys 0m13.591s 00:07:02.544 10:45:32 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.544 10:45:32 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:02.803 10:45:32 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:02.803 10:45:32 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.803 10:45:32 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.803 10:45:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:02.803 ************************************ 00:07:02.803 START TEST spdk_dd_sparse 00:07:02.804 ************************************ 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:02.804 * Looking for test storage... 00:07:02.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:02.804 1+0 records in 00:07:02.804 1+0 records out 00:07:02.804 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00766342 s, 547 MB/s 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:02.804 1+0 records in 00:07:02.804 1+0 records out 00:07:02.804 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00768882 s, 546 MB/s 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:02.804 1+0 records in 00:07:02.804 1+0 records out 00:07:02.804 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00641229 s, 654 MB/s 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:02.804 ************************************ 00:07:02.804 START TEST dd_sparse_file_to_file 00:07:02.804 ************************************ 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # 
file_to_file 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:02.804 10:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:02.804 { 00:07:02.804 "subsystems": [ 00:07:02.804 { 00:07:02.804 "subsystem": "bdev", 00:07:02.804 "config": [ 00:07:02.804 { 00:07:02.804 "params": { 00:07:02.804 "block_size": 4096, 00:07:02.804 "filename": "dd_sparse_aio_disk", 00:07:02.804 "name": "dd_aio" 00:07:02.804 }, 00:07:02.804 "method": "bdev_aio_create" 00:07:02.804 }, 00:07:02.804 { 00:07:02.804 "params": { 00:07:02.804 "lvs_name": "dd_lvstore", 00:07:02.804 "bdev_name": "dd_aio" 00:07:02.804 }, 00:07:02.804 "method": "bdev_lvol_create_lvstore" 00:07:02.804 }, 00:07:02.804 { 00:07:02.804 "method": "bdev_wait_for_examine" 00:07:02.804 } 00:07:02.804 ] 00:07:02.804 } 00:07:02.804 ] 00:07:02.804 } 00:07:02.804 [2024-07-25 10:45:32.522226] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
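file_zero1, prepared above with three 4 MiB writes at seek 0, 4 and 8 (in 4 MiB units), is a sparse 36 MiB file holding only 12 MiB of data, which is why the copy below reports 12/36 [MB] and why --bs is set to 12582912. Worked out, with the stat check mirroring the test's own:

    #   seek=0 ->  0 .. 4  MiB   data
    #   seek=4 -> 16 .. 20 MiB   data   (offset 4 * 4 MiB)
    #   seek=8 -> 32 .. 36 MiB   data   (offset 8 * 4 MiB)
    # apparent size : 9 * 4 MiB = 37748736 bytes
    # allocated     : 3 * 4 MiB = 12582912 bytes = 24576 blocks of 512 bytes
    stat --printf='%s %b\n' file_zero1      # expected: 37748736 24576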
00:07:02.804 [2024-07-25 10:45:32.522495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63560 ] 00:07:03.063 [2024-07-25 10:45:32.661284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.320 [2024-07-25 10:45:32.811449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.320 [2024-07-25 10:45:32.886832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.886  Copying: 12/36 [MB] (average 857 MBps) 00:07:03.886 00:07:03.886 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:03.886 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:03.886 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:03.886 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:03.886 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:03.887 ************************************ 00:07:03.887 END TEST dd_sparse_file_to_file 00:07:03.887 ************************************ 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:03.887 00:07:03.887 real 0m0.901s 00:07:03.887 user 0m0.575s 00:07:03.887 sys 0m0.465s 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:03.887 ************************************ 00:07:03.887 START TEST dd_sparse_file_to_bdev 00:07:03.887 ************************************ 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 
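The file-to-bdev case that follows writes file_zero2 into dd_lvstore/dd_lvol, a thin-provisioned 36 MiB logical volume created by the configuration printed below, so again only the 12 MiB of real data is transferred (the "Copying: 12/36 [MB]" line further down). The lvol entry in that configuration looks like:

    { "method": "bdev_lvol_create",
      "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                  "size_in_mib": 36, "thin_provision": true } }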
00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:03.887 10:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:03.887 { 00:07:03.887 "subsystems": [ 00:07:03.887 { 00:07:03.887 "subsystem": "bdev", 00:07:03.887 "config": [ 00:07:03.887 { 00:07:03.887 "params": { 00:07:03.887 "block_size": 4096, 00:07:03.887 "filename": "dd_sparse_aio_disk", 00:07:03.887 "name": "dd_aio" 00:07:03.887 }, 00:07:03.887 "method": "bdev_aio_create" 00:07:03.887 }, 00:07:03.887 { 00:07:03.887 "params": { 00:07:03.887 "lvs_name": "dd_lvstore", 00:07:03.887 "lvol_name": "dd_lvol", 00:07:03.887 "size_in_mib": 36, 00:07:03.887 "thin_provision": true 00:07:03.887 }, 00:07:03.887 "method": "bdev_lvol_create" 00:07:03.887 }, 00:07:03.887 { 00:07:03.887 "method": "bdev_wait_for_examine" 00:07:03.887 } 00:07:03.887 ] 00:07:03.887 } 00:07:03.887 ] 00:07:03.887 } 00:07:03.887 [2024-07-25 10:45:33.471548] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:03.887 [2024-07-25 10:45:33.471869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63608 ] 00:07:03.887 [2024-07-25 10:45:33.609034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.145 [2024-07-25 10:45:33.746363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.145 [2024-07-25 10:45:33.819153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.663  Copying: 12/36 [MB] (average 428 MBps) 00:07:04.663 00:07:04.663 00:07:04.663 real 0m0.843s 00:07:04.663 user 0m0.553s 00:07:04.663 sys 0m0.441s 00:07:04.663 ************************************ 00:07:04.663 END TEST dd_sparse_file_to_bdev 00:07:04.663 ************************************ 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:04.663 ************************************ 00:07:04.663 START TEST dd_sparse_bdev_to_file 00:07:04.663 ************************************ 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:04.663 10:45:34 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:04.663 10:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:04.663 [2024-07-25 10:45:34.369174] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:04.663 [2024-07-25 10:45:34.369257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63641 ] 00:07:04.663 { 00:07:04.663 "subsystems": [ 00:07:04.663 { 00:07:04.663 "subsystem": "bdev", 00:07:04.663 "config": [ 00:07:04.663 { 00:07:04.663 "params": { 00:07:04.663 "block_size": 4096, 00:07:04.663 "filename": "dd_sparse_aio_disk", 00:07:04.663 "name": "dd_aio" 00:07:04.663 }, 00:07:04.663 "method": "bdev_aio_create" 00:07:04.663 }, 00:07:04.663 { 00:07:04.663 "method": "bdev_wait_for_examine" 00:07:04.663 } 00:07:04.663 ] 00:07:04.663 } 00:07:04.663 ] 00:07:04.663 } 00:07:04.921 [2024-07-25 10:45:34.501000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.921 [2024-07-25 10:45:34.641936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.181 [2024-07-25 10:45:34.714814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.439  Copying: 12/36 [MB] (average 923 MBps) 00:07:05.439 00:07:05.439 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:05.439 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:05.439 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:05.439 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:05.439 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:05.439 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:05.699 00:07:05.699 real 0m0.866s 00:07:05.699 user 0m0.572s 00:07:05.699 sys 0m0.445s 00:07:05.699 10:45:35 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.699 ************************************ 00:07:05.699 END TEST dd_sparse_bdev_to_file 00:07:05.699 ************************************ 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:05.699 ************************************ 00:07:05.699 END TEST spdk_dd_sparse 00:07:05.699 ************************************ 00:07:05.699 00:07:05.699 real 0m2.939s 00:07:05.699 user 0m1.802s 00:07:05.699 sys 0m1.563s 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.699 10:45:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:05.699 10:45:35 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:05.699 10:45:35 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.699 10:45:35 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.699 10:45:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:05.699 ************************************ 00:07:05.699 START TEST spdk_dd_negative 00:07:05.699 ************************************ 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:05.699 * Looking for test storage... 
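The negative tests that follow all use the same pattern: each spdk_dd invocation is wrapped in the NOT helper from autotest_common.sh, so a sub-test passes only if spdk_dd exits non-zero (the es=... bookkeeping visible in the log). A minimal stand-in for the idea, not the real helper:

    # Simplified sketch of NOT(): succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1          # command unexpectedly succeeded -> test failure
        fi
        return 0              # command failed as intended -> test success
    }
    NOT spdk_dd --ii= --ob=   # '--ii=' is not a recognized option, so this passes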
00:07:05.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.699 ************************************ 00:07:05.699 START TEST dd_invalid_arguments 00:07:05.699 ************************************ 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.699 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:05.959 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:05.959 00:07:05.959 CPU options: 00:07:05.959 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:05.959 (like [0,1,10]) 00:07:05.959 --lcores lcore to CPU mapping list. The list is in the format: 00:07:05.959 [<,lcores[@CPUs]>...] 00:07:05.959 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:05.959 Within the group, '-' is used for range separator, 00:07:05.959 ',' is used for single number separator. 00:07:05.959 '( )' can be omitted for single element group, 00:07:05.959 '@' can be omitted if cpus and lcores have the same value 00:07:05.959 --disable-cpumask-locks Disable CPU core lock files. 
00:07:05.959 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:05.959 pollers in the app support interrupt mode) 00:07:05.959 -p, --main-core main (primary) core for DPDK 00:07:05.959 00:07:05.959 Configuration options: 00:07:05.959 -c, --config, --json JSON config file 00:07:05.959 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:05.959 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:05.959 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:05.959 --rpcs-allowed comma-separated list of permitted RPCS 00:07:05.959 --json-ignore-init-errors don't exit on invalid config entry 00:07:05.959 00:07:05.959 Memory options: 00:07:05.959 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:05.959 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:05.959 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:05.959 -R, --huge-unlink unlink huge files after initialization 00:07:05.959 -n, --mem-channels number of memory channels used for DPDK 00:07:05.959 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:05.959 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:05.959 --no-huge run without using hugepages 00:07:05.959 -i, --shm-id shared memory ID (optional) 00:07:05.959 -g, --single-file-segments force creating just one hugetlbfs file 00:07:05.959 00:07:05.959 PCI options: 00:07:05.959 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:05.959 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:05.959 -u, --no-pci disable PCI access 00:07:05.959 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:05.959 00:07:05.959 Log options: 00:07:05.959 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:05.959 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:05.959 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:05.959 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:05.959 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:05.959 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:05.959 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:05.959 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:05.959 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:05.959 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:05.959 virtio_vfio_user, vmd) 00:07:05.959 --silence-noticelog disable notice level logging to stderr 00:07:05.959 00:07:05.959 Trace options: 00:07:05.959 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:05.959 setting 0 to disable trace (default 32768) 00:07:05.959 Tracepoints vary in size and can use more than one trace entry. 00:07:05.959 -e, --tpoint-group [:] 00:07:05.959 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:05.959 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:05.959 [2024-07-25 10:45:35.461681] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:05.959 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:05.959 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:05.959 a tracepoint group. 
First tpoint inside a group can be enabled by 00:07:05.959 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:05.959 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:05.959 in /include/spdk_internal/trace_defs.h 00:07:05.959 00:07:05.959 Other options: 00:07:05.959 -h, --help show this usage 00:07:05.959 -v, --version print SPDK version 00:07:05.959 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:05.959 --env-context Opaque context for use of the env implementation 00:07:05.959 00:07:05.959 Application specific: 00:07:05.959 [--------- DD Options ---------] 00:07:05.959 --if Input file. Must specify either --if or --ib. 00:07:05.959 --ib Input bdev. Must specifier either --if or --ib 00:07:05.959 --of Output file. Must specify either --of or --ob. 00:07:05.959 --ob Output bdev. Must specify either --of or --ob. 00:07:05.959 --iflag Input file flags. 00:07:05.959 --oflag Output file flags. 00:07:05.959 --bs I/O unit size (default: 4096) 00:07:05.959 --qd Queue depth (default: 2) 00:07:05.959 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:05.959 --skip Skip this many I/O units at start of input. (default: 0) 00:07:05.959 --seek Skip this many I/O units at start of output. (default: 0) 00:07:05.959 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:05.959 --sparse Enable hole skipping in input target 00:07:05.959 Available iflag and oflag values: 00:07:05.959 append - append mode 00:07:05.959 direct - use direct I/O for data 00:07:05.959 directory - fail unless a directory 00:07:05.959 dsync - use synchronized I/O for data 00:07:05.959 noatime - do not update access time 00:07:05.959 noctty - do not assign controlling terminal from file 00:07:05.959 nofollow - do not follow symlinks 00:07:05.959 nonblock - use non-blocking I/O 00:07:05.959 sync - use synchronized I/O for data and metadata 00:07:05.959 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:05.959 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.960 00:07:05.960 real 0m0.079s 00:07:05.960 ************************************ 00:07:05.960 END TEST dd_invalid_arguments 00:07:05.960 ************************************ 00:07:05.960 user 0m0.046s 00:07:05.960 sys 0m0.030s 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.960 ************************************ 00:07:05.960 START TEST dd_double_input 00:07:05.960 ************************************ 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 
00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:05.960 [2024-07-25 10:45:35.595461] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
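dd_double_input checks the either/or rule for I/O targets: the input must be a file (--if) or a bdev (--ib) but never both, and dd_double_output below checks the same for --of/--ob. In sketch form, with paths shortened:

    # One input and one output: --if or --ib, --of or --ob, never both of a pair.
    NOT spdk_dd --if=dd.dump0 --ib= --ob=             # rejected during argument parsing:
                                                      # "You may specify either --if or --ib, but not both."
    NOT spdk_dd --if=dd.dump0 --of=dd.dump1 --ob=     # rejected: "either --of or --ob, but not both."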
00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.960 ************************************ 00:07:05.960 END TEST dd_double_input 00:07:05.960 ************************************ 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.960 00:07:05.960 real 0m0.081s 00:07:05.960 user 0m0.044s 00:07:05.960 sys 0m0.036s 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.960 ************************************ 00:07:05.960 START TEST dd_double_output 00:07:05.960 ************************************ 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.960 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:06.219 [2024-07-25 10:45:35.718240] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.219 00:07:06.219 real 0m0.064s 00:07:06.219 user 0m0.037s 00:07:06.219 sys 0m0.026s 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:06.219 ************************************ 00:07:06.219 END TEST dd_double_output 00:07:06.219 ************************************ 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.219 ************************************ 00:07:06.219 START TEST dd_no_input 00:07:06.219 ************************************ 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:06.219 [2024-07-25 10:45:35.845052] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.219 00:07:06.219 real 0m0.079s 00:07:06.219 user 0m0.049s 00:07:06.219 sys 0m0.029s 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.219 ************************************ 00:07:06.219 END TEST dd_no_input 00:07:06.219 ************************************ 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.219 ************************************ 00:07:06.219 START TEST dd_no_output 00:07:06.219 ************************************ 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.219 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:06.220 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.220 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.220 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.220 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.220 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.220 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.220 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.220 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.220 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.220 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.478 [2024-07-25 10:45:35.972482] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:06.478 10:45:35 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:06.478 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.478 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.478 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.478 00:07:06.478 real 0m0.073s 00:07:06.478 user 0m0.046s 00:07:06.478 sys 0m0.025s 00:07:06.478 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.478 10:45:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:06.478 ************************************ 00:07:06.478 END TEST dd_no_output 00:07:06.478 ************************************ 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.478 ************************************ 00:07:06.478 START TEST dd_wrong_blocksize 00:07:06.478 ************************************ 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.478 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:06.479 [2024-07-25 10:45:36.108598] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.479 00:07:06.479 real 0m0.091s 00:07:06.479 user 0m0.052s 00:07:06.479 sys 0m0.038s 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:06.479 ************************************ 00:07:06.479 END TEST dd_wrong_blocksize 00:07:06.479 ************************************ 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.479 ************************************ 00:07:06.479 START TEST dd_smaller_blocksize 00:07:06.479 ************************************ 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.479 
10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.479 10:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:06.737 [2024-07-25 10:45:36.237775] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:06.737 [2024-07-25 10:45:36.237890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63865 ] 00:07:06.737 [2024-07-25 10:45:36.374568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.013 [2024-07-25 10:45:36.532551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.013 [2024-07-25 10:45:36.609626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.272 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:07.272 [2024-07-25 10:45:36.937097] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:07.272 [2024-07-25 10:45:36.937169] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.530 [2024-07-25 10:45:37.106518] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.530 10:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:07.530 10:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.530 10:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.788 00:07:07.788 real 0m1.081s 00:07:07.788 user 0m0.563s 00:07:07.788 sys 0m0.411s 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:07.788 ************************************ 00:07:07.788 END TEST dd_smaller_blocksize 00:07:07.788 ************************************ 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.788 ************************************ 00:07:07.788 START TEST dd_invalid_count 00:07:07.788 ************************************ 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # 
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:07.788 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:07.789 [2024-07-25 10:45:37.370623] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.789 00:07:07.789 real 0m0.063s 00:07:07.789 user 0m0.039s 00:07:07.789 sys 0m0.023s 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:07.789 ************************************ 00:07:07.789 END TEST dd_invalid_count 00:07:07.789 ************************************ 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.789 ************************************ 00:07:07.789 START TEST dd_invalid_oflag 00:07:07.789 ************************************ 
00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:07.789 [2024-07-25 10:45:37.489127] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.789 00:07:07.789 real 0m0.066s 00:07:07.789 user 0m0.044s 00:07:07.789 sys 0m0.022s 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.789 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:07.789 ************************************ 00:07:07.789 END TEST dd_invalid_oflag 00:07:07.789 ************************************ 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:08.047 ************************************ 00:07:08.047 START TEST dd_invalid_iflag 00:07:08.047 ************************************ 00:07:08.047 10:45:37 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:08.047 [2024-07-25 10:45:37.606881] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.047 00:07:08.047 real 0m0.066s 00:07:08.047 user 0m0.044s 00:07:08.047 sys 0m0.021s 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:08.047 ************************************ 00:07:08.047 END TEST dd_invalid_iflag 00:07:08.047 ************************************ 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:08.047 ************************************ 00:07:08.047 START TEST dd_unknown_flag 00:07:08.047 ************************************ 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- 
common/autotest_common.sh@1125 -- # unknown_flag 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.047 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.048 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.048 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.048 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.048 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.048 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.048 10:45:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:08.048 [2024-07-25 10:45:37.724966] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
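Unlike the earlier cases, which are rejected during argument validation before the application starts, --oflag=-1 survives option parsing, so this run (like dd_smaller_blocksize above and dd_invalid_json below) goes through full EAL, reactor, and sock-subsystem startup before failing. A rough equivalent outside the harness, assuming the same built binary and scratch dump files as in the sketch above, would be:

  ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=-1
  # the app initializes, then parse_flags rejects the value:
  #   Unknown file flag: -1
  # and spdk_app_stop is called with a non-zero rc; as the following lines show,
  # the helper sees exit status 234, reduces it to 106, and finally maps it to es=1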
00:07:08.048 [2024-07-25 10:45:37.725055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63961 ] 00:07:08.306 [2024-07-25 10:45:37.863867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.306 [2024-07-25 10:45:37.976826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.564 [2024-07-25 10:45:38.051942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.564 [2024-07-25 10:45:38.094233] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:08.564 [2024-07-25 10:45:38.094305] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.564 [2024-07-25 10:45:38.094365] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:08.564 [2024-07-25 10:45:38.094380] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.564 [2024-07-25 10:45:38.094625] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:08.564 [2024-07-25 10:45:38.094649] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.564 [2024-07-25 10:45:38.094703] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:08.564 [2024-07-25 10:45:38.094715] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:08.564 [2024-07-25 10:45:38.255736] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.823 00:07:08.823 real 0m0.731s 00:07:08.823 user 0m0.432s 00:07:08.823 sys 0m0.202s 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:08.823 ************************************ 00:07:08.823 END TEST dd_unknown_flag 00:07:08.823 ************************************ 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:08.823 ************************************ 00:07:08.823 START TEST dd_invalid_json 00:07:08.823 ************************************ 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.823 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:08.823 [2024-07-25 10:45:38.506347] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
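This last case also reaches full application startup: the --json argument is read from a process-substitution descriptor (/dev/fd/62 in the trace), the JSON config parser rejects its contents, and the copy aborts. A hedged reproduction, assuming the same binary and dump files and feeding an empty JSON document (an approximation of whatever the harness actually pipes in), would be:

  ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --json <(printf '')
  # expected, per the trace below:
  #   parse_json: *ERROR*: JSON data cannot be empty
  #   main: *ERROR*: Error occurred while performing copy
  # followed by a non-zero exit status that the helper again reduces to es=1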
00:07:08.823 [2024-07-25 10:45:38.506445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63991 ] 00:07:09.081 [2024-07-25 10:45:38.644053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.081 [2024-07-25 10:45:38.795822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.081 [2024-07-25 10:45:38.795920] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:09.082 [2024-07-25 10:45:38.795935] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:09.082 [2024-07-25 10:45:38.795944] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.082 [2024-07-25 10:45:38.795982] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:09.357 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:09.357 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.357 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:09.357 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:09.357 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:09.357 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.357 00:07:09.357 real 0m0.523s 00:07:09.357 user 0m0.336s 00:07:09.357 sys 0m0.084s 00:07:09.357 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.357 ************************************ 00:07:09.357 END TEST dd_invalid_json 00:07:09.357 10:45:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:09.357 ************************************ 00:07:09.357 00:07:09.357 real 0m3.707s 00:07:09.357 user 0m1.948s 00:07:09.357 sys 0m1.397s 00:07:09.357 10:45:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.357 10:45:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.357 ************************************ 00:07:09.357 END TEST spdk_dd_negative 00:07:09.357 ************************************ 00:07:09.357 00:07:09.357 real 1m31.566s 00:07:09.357 user 1m0.278s 00:07:09.357 sys 0m39.909s 00:07:09.357 10:45:39 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.357 10:45:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:09.357 ************************************ 00:07:09.357 END TEST spdk_dd 00:07:09.357 ************************************ 00:07:09.645 10:45:39 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:09.645 10:45:39 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:09.645 10:45:39 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:09.645 10:45:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.645 10:45:39 -- common/autotest_common.sh@10 -- # set +x 00:07:09.645 10:45:39 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:09.645 10:45:39 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:09.645 10:45:39 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:09.645 10:45:39 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:09.645 10:45:39 -- 
spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:09.645 10:45:39 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:09.645 10:45:39 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:09.645 10:45:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:09.645 10:45:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.645 10:45:39 -- common/autotest_common.sh@10 -- # set +x 00:07:09.645 ************************************ 00:07:09.645 START TEST nvmf_tcp 00:07:09.645 ************************************ 00:07:09.645 10:45:39 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:09.645 * Looking for test storage... 00:07:09.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:09.645 10:45:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:09.645 10:45:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:09.645 10:45:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:09.645 10:45:39 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:09.645 10:45:39 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.645 10:45:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.645 ************************************ 00:07:09.645 START TEST nvmf_target_core 00:07:09.645 ************************************ 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:09.645 * Looking for test storage... 00:07:09.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:09.645 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:09.646 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:09.646 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:09.646 10:45:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:09.646 10:45:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:09.646 10:45:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.646 10:45:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:09.646 ************************************ 00:07:09.646 START TEST nvmf_host_management 00:07:09.646 ************************************ 00:07:09.646 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:09.905 * Looking for test storage... 
00:07:09.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.905 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:09.906 Cannot find device "nvmf_init_br" 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:09.906 Cannot find device "nvmf_tgt_br" 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:09.906 Cannot find device "nvmf_tgt_br2" 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:09.906 Cannot find device "nvmf_init_br" 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:09.906 Cannot find device "nvmf_tgt_br" 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:09.906 Cannot find device "nvmf_tgt_br2" 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:09.906 Cannot find device "nvmf_br" 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:09.906 Cannot find device "nvmf_init_if" 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:09.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:09.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:09.906 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:10.165 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:10.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:07:10.166 00:07:10.166 --- 10.0.0.2 ping statistics --- 00:07:10.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.166 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:10.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:10.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:07:10.166 00:07:10.166 --- 10.0.0.3 ping statistics --- 00:07:10.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.166 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:10.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:07:10.166 00:07:10.166 --- 10.0.0.1 ping statistics --- 00:07:10.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.166 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=64279 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64279 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64279 ']' 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:10.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.166 10:45:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.425 [2024-07-25 10:45:39.914765] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:10.425 [2024-07-25 10:45:39.914923] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.425 [2024-07-25 10:45:40.059738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.684 [2024-07-25 10:45:40.233573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.684 [2024-07-25 10:45:40.233651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.684 [2024-07-25 10:45:40.233677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.684 [2024-07-25 10:45:40.233688] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.684 [2024-07-25 10:45:40.233697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.684 [2024-07-25 10:45:40.233897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.684 [2024-07-25 10:45:40.234032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.684 [2024-07-25 10:45:40.234204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:10.684 [2024-07-25 10:45:40.234210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.684 [2024-07-25 10:45:40.313997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.251 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.252 [2024-07-25 10:45:40.907208] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.252 10:45:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.252 10:45:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.252 Malloc0 00:07:11.510 [2024-07-25 10:45:41.000818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64343 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64343 /var/tmp/bdevperf.sock 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64343 ']' 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:11.510 { 00:07:11.510 "params": { 00:07:11.510 "name": "Nvme$subsystem", 00:07:11.510 "trtype": "$TEST_TRANSPORT", 00:07:11.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:11.510 "adrfam": "ipv4", 00:07:11.510 "trsvcid": "$NVMF_PORT", 00:07:11.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:11.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:11.510 "hdgst": ${hdgst:-false}, 00:07:11.510 "ddgst": ${ddgst:-false} 00:07:11.510 }, 00:07:11.510 "method": "bdev_nvme_attach_controller" 00:07:11.510 } 00:07:11.510 EOF 00:07:11.510 )") 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:11.510 10:45:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:11.510 "params": { 00:07:11.510 "name": "Nvme0", 00:07:11.510 "trtype": "tcp", 00:07:11.510 "traddr": "10.0.0.2", 00:07:11.510 "adrfam": "ipv4", 00:07:11.510 "trsvcid": "4420", 00:07:11.510 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:11.510 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:11.510 "hdgst": false, 00:07:11.510 "ddgst": false 00:07:11.510 }, 00:07:11.510 "method": "bdev_nvme_attach_controller" 00:07:11.510 }' 00:07:11.510 [2024-07-25 10:45:41.103443] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:11.510 [2024-07-25 10:45:41.104193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64343 ] 00:07:11.510 [2024-07-25 10:45:41.242037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.769 [2024-07-25 10:45:41.399108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.769 [2024-07-25 10:45:41.483301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.028 Running I/O for 10 seconds... 
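For reference, the entries above amount to launching bdevperf against the target just brought up at 10.0.0.2:4420: gen_nvmf_target_json 0 prints a bdev_nvme_attach_controller entry and the test pipes it to bdevperf through process substitution (--json /dev/fd/63). A rough standalone equivalent is sketched below; the bdevperf flags and connection parameters are copied from the trace, while the file name bdevperf.json and the surrounding "subsystems"/"bdev"/"config" envelope are assumptions (the trace only prints the attach-controller entry itself).
# Sketch only: the JSON envelope and file name are assumed; all parameters are taken from the trace above.
cat > bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: verify workload, -t 10: run for 10 seconds
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json bdevperf.json -q 64 -o 65536 -w verify -t 10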
00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.597 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.598 10:45:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.598 10:45:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:12.598 [2024-07-25 10:45:42.230059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.598 [2024-07-25 10:45:42.230432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.598 [2024-07-25 10:45:42.230442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.599 [2024-07-25 10:45:42.230962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.599 [2024-07-25 10:45:42.230971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.230982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.230991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.600 [2024-07-25 10:45:42.231383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.600 [2024-07-25 10:45:42.231394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.601 [2024-07-25 10:45:42.231403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.601 [2024-07-25 10:45:42.231415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.601 [2024-07-25 10:45:42.231424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.601 [2024-07-25 10:45:42.231435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.601 [2024-07-25 10:45:42.231444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.601 [2024-07-25 10:45:42.231456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.601 [2024-07-25 10:45:42.231465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.601 [2024-07-25 10:45:42.231476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.601 [2024-07-25 10:45:42.231485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.601 [2024-07-25 10:45:42.231501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61dec0 is same with the state(5) to be set 00:07:12.601 [2024-07-25 10:45:42.231585] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61dec0 was disconnected and freed. reset controller. 
00:07:12.601 [2024-07-25 10:45:42.231720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.601 [2024-07-25 10:45:42.231747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.601 [2024-07-25 10:45:42.231760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.601 [2024-07-25 10:45:42.231777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.601 [2024-07-25 10:45:42.231788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.601 [2024-07-25 10:45:42.231798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.601 [2024-07-25 10:45:42.231808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.601 [2024-07-25 10:45:42.231817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.601 [2024-07-25 10:45:42.231826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615d50 is same with the state(5) to be set 00:07:12.601 [2024-07-25 10:45:42.232928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:12.601 task offset: 114688 on job bdev=Nvme0n1 fails 00:07:12.601 00:07:12.601 Latency(us) 00:07:12.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.601 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:12.601 Job: Nvme0n1 ended in about 0.62 seconds with error 00:07:12.601 Verification LBA range: start 0x0 length 0x400 00:07:12.601 Nvme0n1 : 0.62 1450.31 90.64 103.59 0.00 40088.07 2204.39 38844.97 00:07:12.601 =================================================================================================================== 00:07:12.601 Total : 1450.31 90.64 103.59 0.00 40088.07 2204.39 38844.97 00:07:12.601 [2024-07-25 10:45:42.234976] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.601 [2024-07-25 10:45:42.235006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615d50 (9): Bad file descriptor 00:07:12.601 [2024-07-25 10:45:42.243208] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
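In short, the wall of ABORTED - SQ DELETION completions and the qpair disconnect above are the intended outcome of the host-management check: once bdevperf had issued enough I/O (read_io_count=771, satisfying the -ge 100 threshold), the test removed host0 from the subsystem and immediately re-added it, forcing the initiator to reset and reconnect the controller, which bdev_nvme reports as "Resetting controller successful". A minimal sketch of that toggle with SPDK's rpc.py follows; treating rpc_cmd as a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket is an assumption, while the RPC names and NQNs are taken from the trace.
# Sketch, assuming the target's RPC socket is the default /var/tmp/spdk.sock:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # host loses access; its queue pairs are torn down
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # host may reconnect; bdev_nvme resets the controller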
00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64343 00:07:13.537 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64343) - No such process 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:13.537 { 00:07:13.537 "params": { 00:07:13.537 "name": "Nvme$subsystem", 00:07:13.537 "trtype": "$TEST_TRANSPORT", 00:07:13.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:13.537 "adrfam": "ipv4", 00:07:13.537 "trsvcid": "$NVMF_PORT", 00:07:13.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:13.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:13.537 "hdgst": ${hdgst:-false}, 00:07:13.537 "ddgst": ${ddgst:-false} 00:07:13.537 }, 00:07:13.537 "method": "bdev_nvme_attach_controller" 00:07:13.537 } 00:07:13.537 EOF 00:07:13.537 )") 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:13.537 10:45:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:13.537 "params": { 00:07:13.537 "name": "Nvme0", 00:07:13.537 "trtype": "tcp", 00:07:13.537 "traddr": "10.0.0.2", 00:07:13.537 "adrfam": "ipv4", 00:07:13.537 "trsvcid": "4420", 00:07:13.537 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.537 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:13.537 "hdgst": false, 00:07:13.537 "ddgst": false 00:07:13.537 }, 00:07:13.537 "method": "bdev_nvme_attach_controller" 00:07:13.537 }' 00:07:13.796 [2024-07-25 10:45:43.280633] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:13.796 [2024-07-25 10:45:43.280711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64386 ] 00:07:13.796 [2024-07-25 10:45:43.414575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.053 [2024-07-25 10:45:43.538367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.053 [2024-07-25 10:45:43.618470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.053 Running I/O for 1 seconds... 00:07:15.429 00:07:15.429 Latency(us) 00:07:15.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.429 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:15.429 Verification LBA range: start 0x0 length 0x400 00:07:15.429 Nvme0n1 : 1.03 1550.44 96.90 0.00 0.00 40460.73 4140.68 38844.97 00:07:15.429 =================================================================================================================== 00:07:15.429 Total : 1550.44 96.90 0.00 0.00 40460.73 4140.68 38844.97 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:15.429 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:15.429 rmmod nvme_tcp 00:07:15.429 rmmod nvme_fabrics 00:07:15.688 rmmod nvme_keyring 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64279 ']' 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64279 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 64279 ']' 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 64279 00:07:15.688 10:45:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64279 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:15.688 killing process with pid 64279 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64279' 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 64279 00:07:15.688 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 64279 00:07:15.946 [2024-07-25 10:45:45.525765] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:15.946 00:07:15.946 real 0m6.223s 00:07:15.946 user 0m24.047s 00:07:15.946 sys 0m1.622s 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.946 ************************************ 00:07:15.946 END TEST nvmf_host_management 00:07:15.946 ************************************ 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.946 10:45:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.946 ************************************ 00:07:15.946 START TEST nvmf_lvol 00:07:15.946 ************************************ 00:07:15.946 10:45:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:16.206 * Looking for test storage... 00:07:16.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.206 10:45:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:16.206 10:45:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:16.206 Cannot find device "nvmf_tgt_br" 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 
00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:16.206 Cannot find device "nvmf_tgt_br2" 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:16.206 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:16.207 Cannot find device "nvmf_tgt_br" 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:16.207 Cannot find device "nvmf_tgt_br2" 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:16.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:16.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:16.207 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:16.466 10:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:16.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:07:16.466 00:07:16.466 --- 10.0.0.2 ping statistics --- 00:07:16.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.466 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:16.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:16.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:07:16.466 00:07:16.466 --- 10.0.0.3 ping statistics --- 00:07:16.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.466 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:16.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:16.466 00:07:16.466 --- 10.0.0.1 ping statistics --- 00:07:16.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.466 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=64594 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 64594 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 64594 ']' 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.466 10:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.466 [2024-07-25 10:45:46.123833] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:16.466 [2024-07-25 10:45:46.123928] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.724 [2024-07-25 10:45:46.260776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.724 [2024-07-25 10:45:46.385482] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.724 [2024-07-25 10:45:46.385543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.725 [2024-07-25 10:45:46.385576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.725 [2024-07-25 10:45:46.385587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.725 [2024-07-25 10:45:46.385597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.725 [2024-07-25 10:45:46.386313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.725 [2024-07-25 10:45:46.386440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.725 [2024-07-25 10:45:46.386451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.725 [2024-07-25 10:45:46.459138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.290 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.290 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:17.290 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:17.290 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.290 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:17.548 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.548 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:17.808 [2024-07-25 10:45:47.309289] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.808 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:18.067 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:18.067 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:18.325 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:18.325 10:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:18.583 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:18.841 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fbde7e35-f97b-4bbc-89aa-c11cbdb2dc43 00:07:18.842 10:45:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fbde7e35-f97b-4bbc-89aa-c11cbdb2dc43 lvol 20 00:07:19.100 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8141919b-6335-4180-8d3e-6942b2a89548 00:07:19.100 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:19.358 10:45:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8141919b-6335-4180-8d3e-6942b2a89548 00:07:19.358 10:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:19.616 [2024-07-25 10:45:49.279695] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.616 10:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:19.875 10:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=64675 00:07:19.875 10:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:19.875 10:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:20.810 10:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 8141919b-6335-4180-8d3e-6942b2a89548 MY_SNAPSHOT 00:07:21.382 10:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7b7b2642-2312-48bc-8922-359301aa620c 00:07:21.382 10:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 8141919b-6335-4180-8d3e-6942b2a89548 30 00:07:21.642 10:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 7b7b2642-2312-48bc-8922-359301aa620c MY_CLONE 00:07:21.901 10:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=755dc28c-b4dc-4e37-a436-583385de014b 00:07:21.901 10:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 755dc28c-b4dc-4e37-a436-583385de014b 00:07:22.468 10:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 64675 00:07:30.580 Initializing NVMe Controllers 00:07:30.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:30.580 Controller IO queue size 128, less than required. 00:07:30.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:30.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:30.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:30.580 Initialization complete. Launching workers. 
00:07:30.580 ======================================================== 00:07:30.580 Latency(us) 00:07:30.580 Device Information : IOPS MiB/s Average min max 00:07:30.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10596.40 41.39 12080.13 1775.46 62536.48 00:07:30.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10707.30 41.83 11961.66 3427.57 110108.35 00:07:30.580 ======================================================== 00:07:30.580 Total : 21303.70 83.22 12020.59 1775.46 110108.35 00:07:30.580 00:07:30.580 10:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:30.580 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8141919b-6335-4180-8d3e-6942b2a89548 00:07:30.836 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fbde7e35-f97b-4bbc-89aa-c11cbdb2dc43 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.094 rmmod nvme_tcp 00:07:31.094 rmmod nvme_fabrics 00:07:31.094 rmmod nvme_keyring 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 64594 ']' 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 64594 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 64594 ']' 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 64594 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64594 00:07:31.094 killing process with pid 64594 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 64594' 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 64594 00:07:31.094 10:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 64594 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:31.658 ************************************ 00:07:31.658 END TEST nvmf_lvol 00:07:31.658 ************************************ 00:07:31.658 00:07:31.658 real 0m15.582s 00:07:31.658 user 1m4.647s 00:07:31.658 sys 0m4.378s 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.658 ************************************ 00:07:31.658 START TEST nvmf_lvs_grow 00:07:31.658 ************************************ 00:07:31.658 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:31.658 * Looking for test storage... 
00:07:31.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:31.659 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:31.918 Cannot find device "nvmf_tgt_br" 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:31.918 Cannot find device "nvmf_tgt_br2" 00:07:31.918 10:46:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:31.918 Cannot find device "nvmf_tgt_br" 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:31.918 Cannot find device "nvmf_tgt_br2" 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:31.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:31.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:31.918 10:46:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:31.918 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:32.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:07:32.175 00:07:32.175 --- 10.0.0.2 ping statistics --- 00:07:32.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.175 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:32.175 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:32.175 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:07:32.175 00:07:32.175 --- 10.0.0.3 ping statistics --- 00:07:32.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.175 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:32.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:32.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:32.175 00:07:32.175 --- 10.0.0.1 ping statistics --- 00:07:32.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.175 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=64997 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 64997 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 64997 ']' 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.175 10:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.175 [2024-07-25 10:46:01.793836] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:32.175 [2024-07-25 10:46:01.793984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.433 [2024-07-25 10:46:01.932812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.433 [2024-07-25 10:46:02.065185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.433 [2024-07-25 10:46:02.065236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.433 [2024-07-25 10:46:02.065247] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.433 [2024-07-25 10:46:02.065255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.433 [2024-07-25 10:46:02.065261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.433 [2024-07-25 10:46:02.065287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.433 [2024-07-25 10:46:02.139127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.368 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.368 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:33.368 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:33.368 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.369 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.369 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.369 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:33.369 [2024-07-25 10:46:03.104967] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.627 ************************************ 00:07:33.627 START TEST lvs_grow_clean 00:07:33.627 ************************************ 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:33.627 10:46:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:33.627 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:33.885 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:33.886 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:34.143 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:34.143 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:34.143 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:34.143 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:34.143 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:34.143 10:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 77b34bef-f81e-4075-8982-4b5f532b3ccf lvol 150 00:07:34.402 10:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5683737f-faf0-425d-bf25-11e9c3e39b90 00:07:34.402 10:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:34.661 10:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:34.661 [2024-07-25 10:46:04.356688] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:34.661 [2024-07-25 10:46:04.356804] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:34.661 true 00:07:34.661 10:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:34.661 10:46:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:34.938 10:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:34.938 10:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:35.195 10:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5683737f-faf0-425d-bf25-11e9c3e39b90 00:07:35.454 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:35.712 [2024-07-25 10:46:05.397362] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.712 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.970 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65079 00:07:35.970 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:35.970 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.970 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65079 /var/tmp/bdevperf.sock 00:07:35.970 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 65079 ']' 00:07:35.970 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:35.970 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:35.970 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:35.970 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.970 10:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:35.970 [2024-07-25 10:46:05.693187] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
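A minimal standalone sketch of the setup sequence the lvs_grow_clean block has just exercised, for reference: back an AIO bdev with a 200M file, build a logical-volume store and a 150M lvol on it, then export the lvol over NVMe/TCP. Commands, paths, sizes and the NQN mirror the transcript; the RPC, AIO_FILE, lvs and lvol shell variables are illustrative shorthands rather than names used by nvmf_lvs_grow.sh itself, and the TCP transport is assumed to exist already (nvmf_create_transport -t tcp -o -u 8192, as logged earlier).

    # Illustrative sketch only: mirrors the RPC calls logged above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$AIO_FILE"                                  # 200M backing file yields 49 usable 4M clusters
    $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096                # expose the file as AIO bdev "aio_bdev" with 4K blocks
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)          # returns the lvstore UUID (77b34bef-... above)
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)              # 150M lvol, rounded to 38 clusters / 38912 4K blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420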
00:07:35.970 [2024-07-25 10:46:05.693304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65079 ] 00:07:36.228 [2024-07-25 10:46:05.833687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.228 [2024-07-25 10:46:05.945136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.486 [2024-07-25 10:46:05.997386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.052 10:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.052 10:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:37.052 10:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:37.310 Nvme0n1 00:07:37.310 10:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:37.568 [ 00:07:37.568 { 00:07:37.568 "name": "Nvme0n1", 00:07:37.568 "aliases": [ 00:07:37.568 "5683737f-faf0-425d-bf25-11e9c3e39b90" 00:07:37.568 ], 00:07:37.568 "product_name": "NVMe disk", 00:07:37.568 "block_size": 4096, 00:07:37.568 "num_blocks": 38912, 00:07:37.568 "uuid": "5683737f-faf0-425d-bf25-11e9c3e39b90", 00:07:37.568 "assigned_rate_limits": { 00:07:37.568 "rw_ios_per_sec": 0, 00:07:37.568 "rw_mbytes_per_sec": 0, 00:07:37.568 "r_mbytes_per_sec": 0, 00:07:37.568 "w_mbytes_per_sec": 0 00:07:37.568 }, 00:07:37.568 "claimed": false, 00:07:37.568 "zoned": false, 00:07:37.568 "supported_io_types": { 00:07:37.568 "read": true, 00:07:37.568 "write": true, 00:07:37.568 "unmap": true, 00:07:37.568 "flush": true, 00:07:37.568 "reset": true, 00:07:37.568 "nvme_admin": true, 00:07:37.568 "nvme_io": true, 00:07:37.568 "nvme_io_md": false, 00:07:37.568 "write_zeroes": true, 00:07:37.568 "zcopy": false, 00:07:37.568 "get_zone_info": false, 00:07:37.568 "zone_management": false, 00:07:37.568 "zone_append": false, 00:07:37.568 "compare": true, 00:07:37.568 "compare_and_write": true, 00:07:37.568 "abort": true, 00:07:37.568 "seek_hole": false, 00:07:37.568 "seek_data": false, 00:07:37.568 "copy": true, 00:07:37.568 "nvme_iov_md": false 00:07:37.568 }, 00:07:37.568 "memory_domains": [ 00:07:37.568 { 00:07:37.568 "dma_device_id": "system", 00:07:37.568 "dma_device_type": 1 00:07:37.568 } 00:07:37.568 ], 00:07:37.568 "driver_specific": { 00:07:37.568 "nvme": [ 00:07:37.568 { 00:07:37.568 "trid": { 00:07:37.568 "trtype": "TCP", 00:07:37.568 "adrfam": "IPv4", 00:07:37.568 "traddr": "10.0.0.2", 00:07:37.568 "trsvcid": "4420", 00:07:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:37.568 }, 00:07:37.568 "ctrlr_data": { 00:07:37.568 "cntlid": 1, 00:07:37.568 "vendor_id": "0x8086", 00:07:37.568 "model_number": "SPDK bdev Controller", 00:07:37.568 "serial_number": "SPDK0", 00:07:37.568 "firmware_revision": "24.09", 00:07:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:37.568 "oacs": { 00:07:37.568 "security": 0, 00:07:37.568 "format": 0, 00:07:37.568 "firmware": 0, 00:07:37.568 "ns_manage": 0 
00:07:37.568 }, 00:07:37.568 "multi_ctrlr": true, 00:07:37.568 "ana_reporting": false 00:07:37.568 }, 00:07:37.568 "vs": { 00:07:37.568 "nvme_version": "1.3" 00:07:37.568 }, 00:07:37.568 "ns_data": { 00:07:37.568 "id": 1, 00:07:37.568 "can_share": true 00:07:37.568 } 00:07:37.568 } 00:07:37.568 ], 00:07:37.568 "mp_policy": "active_passive" 00:07:37.568 } 00:07:37.568 } 00:07:37.568 ] 00:07:37.568 10:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65103 00:07:37.568 10:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:37.568 10:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:37.826 Running I/O for 10 seconds... 00:07:38.759 Latency(us) 00:07:38.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.759 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:07:38.759 =================================================================================================================== 00:07:38.759 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:07:38.759 00:07:39.693 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:39.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.693 Nvme0n1 : 2.00 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:07:39.693 =================================================================================================================== 00:07:39.693 Total : 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:07:39.693 00:07:39.951 true 00:07:39.951 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:39.951 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:40.209 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:40.209 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:40.209 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65103 00:07:40.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.775 Nvme0n1 : 3.00 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:07:40.775 =================================================================================================================== 00:07:40.775 Total : 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:07:40.775 00:07:41.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.732 Nvme0n1 : 4.00 7143.75 27.91 0.00 0.00 0.00 0.00 0.00 00:07:41.732 =================================================================================================================== 00:07:41.732 Total : 7143.75 27.91 0.00 0.00 0.00 0.00 0.00 00:07:41.732 00:07:42.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.668 Nvme0n1 : 5.00 
7137.40 27.88 0.00 0.00 0.00 0.00 0.00 00:07:42.668 =================================================================================================================== 00:07:42.668 Total : 7137.40 27.88 0.00 0.00 0.00 0.00 0.00 00:07:42.668 00:07:43.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.604 Nvme0n1 : 6.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:07:43.604 =================================================================================================================== 00:07:43.604 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:07:43.604 00:07:44.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.978 Nvme0n1 : 7.00 7093.86 27.71 0.00 0.00 0.00 0.00 0.00 00:07:44.978 =================================================================================================================== 00:07:44.978 Total : 7093.86 27.71 0.00 0.00 0.00 0.00 0.00 00:07:44.978 00:07:45.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.914 Nvme0n1 : 8.00 7096.12 27.72 0.00 0.00 0.00 0.00 0.00 00:07:45.914 =================================================================================================================== 00:07:45.914 Total : 7096.12 27.72 0.00 0.00 0.00 0.00 0.00 00:07:45.914 00:07:46.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.849 Nvme0n1 : 9.00 7097.89 27.73 0.00 0.00 0.00 0.00 0.00 00:07:46.849 =================================================================================================================== 00:07:46.849 Total : 7097.89 27.73 0.00 0.00 0.00 0.00 0.00 00:07:46.849 00:07:47.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.786 Nvme0n1 : 10.00 7086.60 27.68 0.00 0.00 0.00 0.00 0.00 00:07:47.786 =================================================================================================================== 00:07:47.786 Total : 7086.60 27.68 0.00 0.00 0.00 0.00 0.00 00:07:47.786 00:07:47.786 00:07:47.786 Latency(us) 00:07:47.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.786 Nvme0n1 : 10.02 7086.80 27.68 0.00 0.00 18057.26 15609.48 53382.05 00:07:47.786 =================================================================================================================== 00:07:47.786 Total : 7086.80 27.68 0.00 0.00 18057.26 15609.48 53382.05 00:07:47.786 0 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65079 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 65079 ']' 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 65079 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65079 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:47.786 killing process with pid 65079 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65079' 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 65079 00:07:47.786 Received shutdown signal, test time was about 10.000000 seconds 00:07:47.786 00:07:47.786 Latency(us) 00:07:47.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.786 =================================================================================================================== 00:07:47.786 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:47.786 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 65079 00:07:48.044 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.302 10:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:48.561 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:48.561 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:48.859 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:48.859 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:48.859 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:49.138 [2024-07-25 10:46:18.706555] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.138 10:46:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:49.138 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:49.396 request: 00:07:49.396 { 00:07:49.396 "uuid": "77b34bef-f81e-4075-8982-4b5f532b3ccf", 00:07:49.396 "method": "bdev_lvol_get_lvstores", 00:07:49.396 "req_id": 1 00:07:49.396 } 00:07:49.396 Got JSON-RPC error response 00:07:49.396 response: 00:07:49.396 { 00:07:49.396 "code": -19, 00:07:49.396 "message": "No such device" 00:07:49.396 } 00:07:49.396 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:49.396 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.396 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.396 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.396 10:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.655 aio_bdev 00:07:49.655 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5683737f-faf0-425d-bf25-11e9c3e39b90 00:07:49.655 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=5683737f-faf0-425d-bf25-11e9c3e39b90 00:07:49.655 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:49.655 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:49.655 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:49.655 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:49.655 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:49.655 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5683737f-faf0-425d-bf25-11e9c3e39b90 -t 2000 00:07:49.913 [ 00:07:49.913 { 00:07:49.913 "name": "5683737f-faf0-425d-bf25-11e9c3e39b90", 00:07:49.913 "aliases": [ 00:07:49.913 "lvs/lvol" 00:07:49.913 ], 00:07:49.913 "product_name": "Logical Volume", 00:07:49.913 "block_size": 4096, 00:07:49.913 "num_blocks": 38912, 00:07:49.913 "uuid": "5683737f-faf0-425d-bf25-11e9c3e39b90", 00:07:49.913 
"assigned_rate_limits": { 00:07:49.913 "rw_ios_per_sec": 0, 00:07:49.913 "rw_mbytes_per_sec": 0, 00:07:49.913 "r_mbytes_per_sec": 0, 00:07:49.913 "w_mbytes_per_sec": 0 00:07:49.913 }, 00:07:49.913 "claimed": false, 00:07:49.913 "zoned": false, 00:07:49.913 "supported_io_types": { 00:07:49.913 "read": true, 00:07:49.913 "write": true, 00:07:49.913 "unmap": true, 00:07:49.913 "flush": false, 00:07:49.913 "reset": true, 00:07:49.913 "nvme_admin": false, 00:07:49.913 "nvme_io": false, 00:07:49.913 "nvme_io_md": false, 00:07:49.913 "write_zeroes": true, 00:07:49.913 "zcopy": false, 00:07:49.913 "get_zone_info": false, 00:07:49.914 "zone_management": false, 00:07:49.914 "zone_append": false, 00:07:49.914 "compare": false, 00:07:49.914 "compare_and_write": false, 00:07:49.914 "abort": false, 00:07:49.914 "seek_hole": true, 00:07:49.914 "seek_data": true, 00:07:49.914 "copy": false, 00:07:49.914 "nvme_iov_md": false 00:07:49.914 }, 00:07:49.914 "driver_specific": { 00:07:49.914 "lvol": { 00:07:49.914 "lvol_store_uuid": "77b34bef-f81e-4075-8982-4b5f532b3ccf", 00:07:49.914 "base_bdev": "aio_bdev", 00:07:49.914 "thin_provision": false, 00:07:49.914 "num_allocated_clusters": 38, 00:07:49.914 "snapshot": false, 00:07:49.914 "clone": false, 00:07:49.914 "esnap_clone": false 00:07:49.914 } 00:07:49.914 } 00:07:49.914 } 00:07:49.914 ] 00:07:49.914 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:49.914 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:49.914 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:50.173 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:50.173 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:50.173 10:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:50.431 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:50.431 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5683737f-faf0-425d-bf25-11e9c3e39b90 00:07:50.689 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77b34bef-f81e-4075-8982-4b5f532b3ccf 00:07:50.948 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:51.206 10:46:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:51.465 00:07:51.465 real 0m17.960s 00:07:51.465 user 0m17.017s 00:07:51.465 sys 0m2.384s 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.465 ************************************ 00:07:51.465 END TEST lvs_grow_clean 00:07:51.465 ************************************ 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.465 ************************************ 00:07:51.465 START TEST lvs_grow_dirty 00:07:51.465 ************************************ 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:51.465 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:51.723 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:51.723 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:51.987 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ccee3df3-2ebb-4297-878b-310d9447eec4 00:07:51.987 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:51.987 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:07:52.245 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:52.245 10:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:52.245 10:46:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ccee3df3-2ebb-4297-878b-310d9447eec4 lvol 150 00:07:52.504 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fdb90fb8-6fee-429d-9ca3-c8156492603a 00:07:52.504 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:52.504 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:52.764 [2024-07-25 10:46:22.370764] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:52.764 [2024-07-25 10:46:22.370895] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:52.764 true 00:07:52.764 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:07:52.764 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:53.027 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:53.027 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:53.291 10:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fdb90fb8-6fee-429d-9ca3-c8156492603a 00:07:53.550 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.809 [2024-07-25 10:46:23.299339] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.810 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.810 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65347 00:07:53.810 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.810 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65347 /var/tmp/bdevperf.sock 00:07:53.810 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:53.810 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65347 ']' 00:07:53.810 10:46:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.810 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.810 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.810 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.810 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:54.069 [2024-07-25 10:46:23.581404] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:54.069 [2024-07-25 10:46:23.581497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65347 ] 00:07:54.069 [2024-07-25 10:46:23.713736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.327 [2024-07-25 10:46:23.844948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.327 [2024-07-25 10:46:23.919127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.895 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.895 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:54.895 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:55.153 Nvme0n1 00:07:55.153 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:55.412 [ 00:07:55.412 { 00:07:55.412 "name": "Nvme0n1", 00:07:55.412 "aliases": [ 00:07:55.412 "fdb90fb8-6fee-429d-9ca3-c8156492603a" 00:07:55.412 ], 00:07:55.412 "product_name": "NVMe disk", 00:07:55.412 "block_size": 4096, 00:07:55.412 "num_blocks": 38912, 00:07:55.412 "uuid": "fdb90fb8-6fee-429d-9ca3-c8156492603a", 00:07:55.412 "assigned_rate_limits": { 00:07:55.412 "rw_ios_per_sec": 0, 00:07:55.412 "rw_mbytes_per_sec": 0, 00:07:55.412 "r_mbytes_per_sec": 0, 00:07:55.412 "w_mbytes_per_sec": 0 00:07:55.412 }, 00:07:55.412 "claimed": false, 00:07:55.412 "zoned": false, 00:07:55.412 "supported_io_types": { 00:07:55.412 "read": true, 00:07:55.412 "write": true, 00:07:55.412 "unmap": true, 00:07:55.412 "flush": true, 00:07:55.412 "reset": true, 00:07:55.412 "nvme_admin": true, 00:07:55.412 "nvme_io": true, 00:07:55.412 "nvme_io_md": false, 00:07:55.412 "write_zeroes": true, 00:07:55.412 "zcopy": false, 00:07:55.412 "get_zone_info": false, 00:07:55.412 "zone_management": false, 00:07:55.412 "zone_append": false, 00:07:55.412 "compare": true, 00:07:55.412 "compare_and_write": true, 00:07:55.412 "abort": true, 00:07:55.412 "seek_hole": false, 00:07:55.412 
"seek_data": false, 00:07:55.412 "copy": true, 00:07:55.412 "nvme_iov_md": false 00:07:55.412 }, 00:07:55.412 "memory_domains": [ 00:07:55.412 { 00:07:55.412 "dma_device_id": "system", 00:07:55.412 "dma_device_type": 1 00:07:55.412 } 00:07:55.412 ], 00:07:55.412 "driver_specific": { 00:07:55.412 "nvme": [ 00:07:55.412 { 00:07:55.412 "trid": { 00:07:55.412 "trtype": "TCP", 00:07:55.412 "adrfam": "IPv4", 00:07:55.412 "traddr": "10.0.0.2", 00:07:55.412 "trsvcid": "4420", 00:07:55.412 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:55.412 }, 00:07:55.412 "ctrlr_data": { 00:07:55.412 "cntlid": 1, 00:07:55.412 "vendor_id": "0x8086", 00:07:55.412 "model_number": "SPDK bdev Controller", 00:07:55.412 "serial_number": "SPDK0", 00:07:55.412 "firmware_revision": "24.09", 00:07:55.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.412 "oacs": { 00:07:55.412 "security": 0, 00:07:55.412 "format": 0, 00:07:55.412 "firmware": 0, 00:07:55.412 "ns_manage": 0 00:07:55.412 }, 00:07:55.412 "multi_ctrlr": true, 00:07:55.412 "ana_reporting": false 00:07:55.412 }, 00:07:55.412 "vs": { 00:07:55.412 "nvme_version": "1.3" 00:07:55.412 }, 00:07:55.412 "ns_data": { 00:07:55.412 "id": 1, 00:07:55.412 "can_share": true 00:07:55.412 } 00:07:55.412 } 00:07:55.412 ], 00:07:55.412 "mp_policy": "active_passive" 00:07:55.412 } 00:07:55.412 } 00:07:55.412 ] 00:07:55.412 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:55.412 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65366 00:07:55.412 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:55.412 Running I/O for 10 seconds... 
00:07:56.788 Latency(us) 00:07:56.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.788 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:07:56.788 =================================================================================================================== 00:07:56.788 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:07:56.788 00:07:57.355 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:07:57.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.614 Nvme0n1 : 2.00 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:07:57.614 =================================================================================================================== 00:07:57.614 Total : 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:07:57.614 00:07:57.614 true 00:07:57.614 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:57.614 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:07:58.192 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:58.192 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:58.192 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65366 00:07:58.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.462 Nvme0n1 : 3.00 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:07:58.462 =================================================================================================================== 00:07:58.462 Total : 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:07:58.462 00:07:59.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.397 Nvme0n1 : 4.00 6889.75 26.91 0.00 0.00 0.00 0.00 0.00 00:07:59.397 =================================================================================================================== 00:07:59.397 Total : 6889.75 26.91 0.00 0.00 0.00 0.00 0.00 00:07:59.397 00:08:00.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.772 Nvme0n1 : 5.00 6832.60 26.69 0.00 0.00 0.00 0.00 0.00 00:08:00.772 =================================================================================================================== 00:08:00.772 Total : 6832.60 26.69 0.00 0.00 0.00 0.00 0.00 00:08:00.772 00:08:01.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.706 Nvme0n1 : 6.00 6836.83 26.71 0.00 0.00 0.00 0.00 0.00 00:08:01.706 =================================================================================================================== 00:08:01.706 Total : 6836.83 26.71 0.00 0.00 0.00 0.00 0.00 00:08:01.706 00:08:02.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.640 Nvme0n1 : 7.00 6707.14 26.20 0.00 0.00 0.00 0.00 0.00 00:08:02.640 =================================================================================================================== 00:08:02.640 
Total : 6707.14 26.20 0.00 0.00 0.00 0.00 0.00 00:08:02.640 00:08:03.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.593 Nvme0n1 : 8.00 6757.75 26.40 0.00 0.00 0.00 0.00 0.00 00:08:03.593 =================================================================================================================== 00:08:03.593 Total : 6757.75 26.40 0.00 0.00 0.00 0.00 0.00 00:08:03.593 00:08:04.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.526 Nvme0n1 : 9.00 6768.89 26.44 0.00 0.00 0.00 0.00 0.00 00:08:04.526 =================================================================================================================== 00:08:04.527 Total : 6768.89 26.44 0.00 0.00 0.00 0.00 0.00 00:08:04.527 00:08:05.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.462 Nvme0n1 : 10.00 6790.50 26.53 0.00 0.00 0.00 0.00 0.00 00:08:05.462 =================================================================================================================== 00:08:05.462 Total : 6790.50 26.53 0.00 0.00 0.00 0.00 0.00 00:08:05.462 00:08:05.462 00:08:05.462 Latency(us) 00:08:05.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.463 Nvme0n1 : 10.00 6800.74 26.57 0.00 0.00 18816.50 14417.92 214481.45 00:08:05.463 =================================================================================================================== 00:08:05.463 Total : 6800.74 26.57 0.00 0.00 18816.50 14417.92 214481.45 00:08:05.463 0 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65347 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 65347 ']' 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 65347 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65347 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:05.463 killing process with pid 65347 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65347' 00:08:05.463 Received shutdown signal, test time was about 10.000000 seconds 00:08:05.463 00:08:05.463 Latency(us) 00:08:05.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.463 =================================================================================================================== 00:08:05.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 65347 00:08:05.463 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 65347 
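The pass/fail signal in both sub-tests is the cluster count reported by bdev_lvol_get_lvstores before and after the grow: 49 data clusters for the 200M file, 99 after growing to 400M, and 61 free clusters once the 38 held by the 150M lvol are subtracted. A condensed sketch of that grow-and-verify step, reusing the same jq filters as the transcript (the $RPC, $AIO_FILE and $lvs shorthands carry over from the earlier sketch and are illustrative only):

    # Sketch: enlarge the backing file, rescan, grow the lvstore, then verify the counts.
    truncate -s 400M "$AIO_FILE"                                  # enlarge the backing file from 200M to 400M
    $RPC bdev_aio_rescan aio_bdev                                 # bdev picks up the new size (51200 -> 102400 blocks)
    $RPC bdev_lvol_grow_lvstore -u "$lvs"                         # lvstore claims the newly available clusters
    data_clusters=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    free_clusters=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( data_clusters == 99 ))                                     # was 49 before the grow
    (( free_clusters == 61 ))                                     # 99 total minus the 38 clusters allocated to the lvol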
00:08:05.721 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.979 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:06.237 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:08:06.237 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 64997 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 64997 00:08:06.496 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 64997 Killed "${NVMF_APP[@]}" "$@" 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=65499 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 65499 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65499 ']' 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
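What sets lvs_grow_dirty apart is visible at this point in the log: after the lvstore has been grown, the target holding it is killed with SIGKILL rather than shut down, and the freshly started target has to replay blobstore metadata when the AIO bdev is re-created (the "Performing recovery on blobstore" and "Recover: blob" notices that follow). A hedged outline of that step; $nvmfpid stands in for the killed target's pid (64997 above), and starting the replacement target is left to the suite's own nvmfappstart/waitforlisten helpers:

    # Sketch: crash the target with a grown-but-unsynced lvstore, then recover on a new target.
    kill -9 "$nvmfpid"                                            # hard kill, no clean blobstore shutdown
    # ...start a new nvmf_tgt and wait for /var/tmp/spdk.sock (nvmfappstart in the test suite)...
    $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096                # re-attaching the file triggers blobstore recovery
    free_clusters=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( free_clusters == 61 ))                                     # the grow survived the crash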
00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.496 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.496 [2024-07-25 10:46:36.213445] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:06.496 [2024-07-25 10:46:36.213792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.754 [2024-07-25 10:46:36.352555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.754 [2024-07-25 10:46:36.473301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.754 [2024-07-25 10:46:36.473612] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.754 [2024-07-25 10:46:36.473744] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.754 [2024-07-25 10:46:36.473758] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.754 [2024-07-25 10:46:36.473767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.754 [2024-07-25 10:46:36.473797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.011 [2024-07-25 10:46:36.547632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.577 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.577 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:07.577 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.577 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.577 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:07.577 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.577 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.835 [2024-07-25 10:46:37.425377] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:07.835 [2024-07-25 10:46:37.425936] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:07.835 [2024-07-25 10:46:37.426289] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:07.835 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:07.835 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fdb90fb8-6fee-429d-9ca3-c8156492603a 00:08:07.835 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=fdb90fb8-6fee-429d-9ca3-c8156492603a 00:08:07.835 10:46:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.835 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:07.835 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.835 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.835 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.111 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdb90fb8-6fee-429d-9ca3-c8156492603a -t 2000 00:08:08.393 [ 00:08:08.393 { 00:08:08.393 "name": "fdb90fb8-6fee-429d-9ca3-c8156492603a", 00:08:08.393 "aliases": [ 00:08:08.393 "lvs/lvol" 00:08:08.393 ], 00:08:08.393 "product_name": "Logical Volume", 00:08:08.393 "block_size": 4096, 00:08:08.393 "num_blocks": 38912, 00:08:08.393 "uuid": "fdb90fb8-6fee-429d-9ca3-c8156492603a", 00:08:08.393 "assigned_rate_limits": { 00:08:08.393 "rw_ios_per_sec": 0, 00:08:08.393 "rw_mbytes_per_sec": 0, 00:08:08.393 "r_mbytes_per_sec": 0, 00:08:08.393 "w_mbytes_per_sec": 0 00:08:08.393 }, 00:08:08.393 "claimed": false, 00:08:08.393 "zoned": false, 00:08:08.393 "supported_io_types": { 00:08:08.393 "read": true, 00:08:08.393 "write": true, 00:08:08.393 "unmap": true, 00:08:08.393 "flush": false, 00:08:08.393 "reset": true, 00:08:08.393 "nvme_admin": false, 00:08:08.393 "nvme_io": false, 00:08:08.393 "nvme_io_md": false, 00:08:08.393 "write_zeroes": true, 00:08:08.393 "zcopy": false, 00:08:08.393 "get_zone_info": false, 00:08:08.393 "zone_management": false, 00:08:08.393 "zone_append": false, 00:08:08.393 "compare": false, 00:08:08.393 "compare_and_write": false, 00:08:08.393 "abort": false, 00:08:08.393 "seek_hole": true, 00:08:08.393 "seek_data": true, 00:08:08.393 "copy": false, 00:08:08.393 "nvme_iov_md": false 00:08:08.393 }, 00:08:08.393 "driver_specific": { 00:08:08.393 "lvol": { 00:08:08.393 "lvol_store_uuid": "ccee3df3-2ebb-4297-878b-310d9447eec4", 00:08:08.393 "base_bdev": "aio_bdev", 00:08:08.393 "thin_provision": false, 00:08:08.393 "num_allocated_clusters": 38, 00:08:08.393 "snapshot": false, 00:08:08.393 "clone": false, 00:08:08.393 "esnap_clone": false 00:08:08.393 } 00:08:08.393 } 00:08:08.393 } 00:08:08.393 ] 00:08:08.393 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:08.393 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:08:08.393 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:08.651 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:08.651 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:08:08.651 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r 
'.[0].total_data_clusters' 00:08:08.910 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:08.910 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.168 [2024-07-25 10:46:38.710879] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:09.168 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:08:09.427 request: 00:08:09.427 { 00:08:09.427 "uuid": "ccee3df3-2ebb-4297-878b-310d9447eec4", 00:08:09.427 "method": "bdev_lvol_get_lvstores", 00:08:09.427 "req_id": 1 00:08:09.427 } 00:08:09.427 Got JSON-RPC error response 00:08:09.427 response: 00:08:09.427 { 00:08:09.427 "code": -19, 00:08:09.427 "message": "No such device" 00:08:09.427 } 00:08:09.427 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:09.427 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.427 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:09.427 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.427 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.685 aio_bdev 00:08:09.685 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fdb90fb8-6fee-429d-9ca3-c8156492603a 00:08:09.685 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=fdb90fb8-6fee-429d-9ca3-c8156492603a 00:08:09.685 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.685 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:09.685 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.685 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.685 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:09.944 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdb90fb8-6fee-429d-9ca3-c8156492603a -t 2000 00:08:10.203 [ 00:08:10.203 { 00:08:10.203 "name": "fdb90fb8-6fee-429d-9ca3-c8156492603a", 00:08:10.203 "aliases": [ 00:08:10.203 "lvs/lvol" 00:08:10.203 ], 00:08:10.203 "product_name": "Logical Volume", 00:08:10.203 "block_size": 4096, 00:08:10.203 "num_blocks": 38912, 00:08:10.203 "uuid": "fdb90fb8-6fee-429d-9ca3-c8156492603a", 00:08:10.203 "assigned_rate_limits": { 00:08:10.203 "rw_ios_per_sec": 0, 00:08:10.203 "rw_mbytes_per_sec": 0, 00:08:10.203 "r_mbytes_per_sec": 0, 00:08:10.203 "w_mbytes_per_sec": 0 00:08:10.203 }, 00:08:10.203 "claimed": false, 00:08:10.203 "zoned": false, 00:08:10.203 "supported_io_types": { 00:08:10.203 "read": true, 00:08:10.203 "write": true, 00:08:10.203 "unmap": true, 00:08:10.203 "flush": false, 00:08:10.203 "reset": true, 00:08:10.203 "nvme_admin": false, 00:08:10.203 "nvme_io": false, 00:08:10.203 "nvme_io_md": false, 00:08:10.203 "write_zeroes": true, 00:08:10.203 "zcopy": false, 00:08:10.203 "get_zone_info": false, 00:08:10.203 "zone_management": false, 00:08:10.203 "zone_append": false, 00:08:10.203 "compare": false, 00:08:10.203 "compare_and_write": false, 00:08:10.203 "abort": false, 00:08:10.203 "seek_hole": true, 00:08:10.203 "seek_data": true, 00:08:10.203 "copy": false, 00:08:10.203 "nvme_iov_md": false 00:08:10.203 }, 00:08:10.203 "driver_specific": { 00:08:10.203 "lvol": { 00:08:10.203 "lvol_store_uuid": "ccee3df3-2ebb-4297-878b-310d9447eec4", 00:08:10.203 "base_bdev": "aio_bdev", 00:08:10.203 "thin_provision": false, 00:08:10.203 "num_allocated_clusters": 38, 00:08:10.203 "snapshot": false, 00:08:10.203 "clone": false, 00:08:10.203 "esnap_clone": false 00:08:10.203 } 00:08:10.203 } 00:08:10.203 } 00:08:10.203 ] 00:08:10.203 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:10.203 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:08:10.203 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:08:10.203 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:10.203 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:08:10.203 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:10.462 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:10.462 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fdb90fb8-6fee-429d-9ca3-c8156492603a 00:08:10.720 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ccee3df3-2ebb-4297-878b-310d9447eec4 00:08:10.979 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.237 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.805 ************************************ 00:08:11.805 END TEST lvs_grow_dirty 00:08:11.805 ************************************ 00:08:11.805 00:08:11.805 real 0m20.139s 00:08:11.805 user 0m41.235s 00:08:11.805 sys 0m9.033s 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:11.805 nvmf_trace.0 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:11.805 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:12.063 10:46:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.063 rmmod nvme_tcp 00:08:12.063 rmmod nvme_fabrics 00:08:12.063 rmmod nvme_keyring 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 65499 ']' 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 65499 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 65499 ']' 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 65499 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65499 00:08:12.063 killing process with pid 65499 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65499' 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 65499 00:08:12.063 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 65499 00:08:12.322 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.322 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.322 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.322 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.322 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.322 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.322 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.322 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.322 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:12.322 00:08:12.322 real 0m40.709s 00:08:12.322 user 1m4.599s 00:08:12.322 sys 0m12.185s 00:08:12.322 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.322 10:46:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.322 ************************************ 00:08:12.322 END TEST nvmf_lvs_grow 00:08:12.322 ************************************ 00:08:12.322 10:46:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:12.322 10:46:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:12.322 10:46:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.322 10:46:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.322 ************************************ 00:08:12.322 START TEST nvmf_bdev_io_wait 00:08:12.322 ************************************ 00:08:12.322 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:12.581 * Looking for test storage... 00:08:12.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.581 10:46:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:12.581 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 
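Illustrative sketch, not part of the captured output: the nvmf/common.sh lines above only stage identifiers for later use, among them the 4420/4421/4422 ports, a freshly generated host NQN and host ID, and the $NVME_CONNECT / "${NVME_HOST[@]}" pieces. A minimal sketch of how an initiator-side connect would be assembled from them; this particular test never issues the call, and the flag spelling is standard nvme-cli, assumed rather than taken from this log:

  # sketch only, under the assumptions stated above
  nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" -n "$NVME_SUBNQN" \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"    # i.e. $NVME_CONNECT ... "${NVME_HOST[@]}"; 10.0.0.2:4420 in this run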
00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.582 
10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:12.582 Cannot find device "nvmf_tgt_br" 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.582 Cannot find device "nvmf_tgt_br2" 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:12.582 Cannot find device "nvmf_tgt_br" 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:12.582 Cannot find device "nvmf_tgt_br2" 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:12.582 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.841 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:12.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:08:12.842 00:08:12.842 --- 10.0.0.2 ping statistics --- 00:08:12.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.842 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:12.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:12.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:08:12.842 00:08:12.842 --- 10.0.0.3 ping statistics --- 00:08:12.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.842 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:12.842 00:08:12.842 --- 10.0.0.1 ping statistics --- 00:08:12.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.842 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=65818 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 65818 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 65818 ']' 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
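The ip/iptables xtrace above (nvmf/common.sh@166 through @207) wires up the virtual topology every TCP-transport test in this log relies on: one veth pair for the initiator, one whose far end is moved into the nvmf_tgt_ns_spdk namespace for the target, both joined through the nvmf_br bridge, plus a firewall rule admitting port 4420. A condensed sketch of the same wiring, keeping only the first target interface and omitting the link-up steps:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target end is pushed into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                         # bridge the two host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                              # the connectivity check whose output appears above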
00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.842 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.842 [2024-07-25 10:46:42.541035] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:12.842 [2024-07-25 10:46:42.541283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.100 [2024-07-25 10:46:42.678685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.100 [2024-07-25 10:46:42.802713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.100 [2024-07-25 10:46:42.803016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.100 [2024-07-25 10:46:42.803151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.100 [2024-07-25 10:46:42.803202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.100 [2024-07-25 10:46:42.803289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.100 [2024-07-25 10:46:42.803516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.100 [2024-07-25 10:46:42.803645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.100 [2024-07-25 10:46:42.803835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.100 [2024-07-25 10:46:42.803840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.036 [2024-07-25 10:46:43.625622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion 
override: uring 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.036 [2024-07-25 10:46:43.642901] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.036 Malloc0 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.036 [2024-07-25 10:46:43.721248] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=65853 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:14.036 10:46:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=65855 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:14.036 { 00:08:14.036 "params": { 00:08:14.036 "name": "Nvme$subsystem", 00:08:14.036 "trtype": "$TEST_TRANSPORT", 00:08:14.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.036 "adrfam": "ipv4", 00:08:14.036 "trsvcid": "$NVMF_PORT", 00:08:14.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.036 "hdgst": ${hdgst:-false}, 00:08:14.036 "ddgst": ${ddgst:-false} 00:08:14.036 }, 00:08:14.036 "method": "bdev_nvme_attach_controller" 00:08:14.036 } 00:08:14.036 EOF 00:08:14.036 )") 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=65857 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:14.036 { 00:08:14.036 "params": { 00:08:14.036 "name": "Nvme$subsystem", 00:08:14.036 "trtype": "$TEST_TRANSPORT", 00:08:14.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.036 "adrfam": "ipv4", 00:08:14.036 "trsvcid": "$NVMF_PORT", 00:08:14.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.036 "hdgst": ${hdgst:-false}, 00:08:14.036 "ddgst": ${ddgst:-false} 00:08:14.036 }, 00:08:14.036 "method": "bdev_nvme_attach_controller" 00:08:14.036 } 00:08:14.036 EOF 00:08:14.036 )") 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:14.036 { 00:08:14.036 "params": { 00:08:14.036 "name": "Nvme$subsystem", 00:08:14.036 
"trtype": "$TEST_TRANSPORT", 00:08:14.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.036 "adrfam": "ipv4", 00:08:14.036 "trsvcid": "$NVMF_PORT", 00:08:14.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.036 "hdgst": ${hdgst:-false}, 00:08:14.036 "ddgst": ${ddgst:-false} 00:08:14.036 }, 00:08:14.036 "method": "bdev_nvme_attach_controller" 00:08:14.036 } 00:08:14.036 EOF 00:08:14.036 )") 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:14.036 { 00:08:14.036 "params": { 00:08:14.036 "name": "Nvme$subsystem", 00:08:14.036 "trtype": "$TEST_TRANSPORT", 00:08:14.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.036 "adrfam": "ipv4", 00:08:14.036 "trsvcid": "$NVMF_PORT", 00:08:14.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.036 "hdgst": ${hdgst:-false}, 00:08:14.036 "ddgst": ${ddgst:-false} 00:08:14.036 }, 00:08:14.036 "method": "bdev_nvme_attach_controller" 00:08:14.036 } 00:08:14.036 EOF 00:08:14.036 )") 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65861 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:14.036 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:14.036 "params": { 00:08:14.037 "name": "Nvme1", 00:08:14.037 "trtype": "tcp", 00:08:14.037 "traddr": "10.0.0.2", 00:08:14.037 "adrfam": "ipv4", 00:08:14.037 "trsvcid": "4420", 00:08:14.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.037 "hdgst": false, 00:08:14.037 "ddgst": false 00:08:14.037 }, 00:08:14.037 "method": "bdev_nvme_attach_controller" 00:08:14.037 }' 00:08:14.037 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:14.037 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:14.037 "params": { 00:08:14.037 "name": "Nvme1", 00:08:14.037 "trtype": "tcp", 00:08:14.037 "traddr": "10.0.0.2", 00:08:14.037 "adrfam": "ipv4", 00:08:14.037 "trsvcid": "4420", 00:08:14.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.037 "hdgst": false, 00:08:14.037 "ddgst": false 00:08:14.037 }, 
00:08:14.037 "method": "bdev_nvme_attach_controller" 00:08:14.037 }' 00:08:14.037 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:14.037 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:14.037 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:14.037 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:14.037 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:14.037 "params": { 00:08:14.037 "name": "Nvme1", 00:08:14.037 "trtype": "tcp", 00:08:14.037 "traddr": "10.0.0.2", 00:08:14.037 "adrfam": "ipv4", 00:08:14.037 "trsvcid": "4420", 00:08:14.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.037 "hdgst": false, 00:08:14.037 "ddgst": false 00:08:14.037 }, 00:08:14.037 "method": "bdev_nvme_attach_controller" 00:08:14.037 }' 00:08:14.037 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:14.037 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:14.037 "params": { 00:08:14.037 "name": "Nvme1", 00:08:14.037 "trtype": "tcp", 00:08:14.037 "traddr": "10.0.0.2", 00:08:14.037 "adrfam": "ipv4", 00:08:14.037 "trsvcid": "4420", 00:08:14.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.037 "hdgst": false, 00:08:14.037 "ddgst": false 00:08:14.037 }, 00:08:14.037 "method": "bdev_nvme_attach_controller" 00:08:14.037 }' 00:08:14.294 [2024-07-25 10:46:43.792566] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:14.294 [2024-07-25 10:46:43.792932] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:14.294 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 65853 00:08:14.294 [2024-07-25 10:46:43.804294] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:14.294 [2024-07-25 10:46:43.804778] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:14.294 [2024-07-25 10:46:43.809033] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:14.294 [2024-07-25 10:46:43.809138] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:14.294 [2024-07-25 10:46:43.813806] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:14.294 [2024-07-25 10:46:43.813896] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:14.294 [2024-07-25 10:46:44.020941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.553 [2024-07-25 10:46:44.097032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.553 [2024-07-25 10:46:44.113352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:14.553 [2024-07-25 10:46:44.172151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.553 [2024-07-25 10:46:44.186860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.553 [2024-07-25 10:46:44.200311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:14.553 [2024-07-25 10:46:44.249531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.553 [2024-07-25 10:46:44.264697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:14.553 [2024-07-25 10:46:44.266825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.553 Running I/O for 1 seconds... 00:08:14.811 [2024-07-25 10:46:44.310776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.811 Running I/O for 1 seconds... 00:08:14.811 [2024-07-25 10:46:44.403415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:14.811 Running I/O for 1 seconds... 00:08:14.811 [2024-07-25 10:46:44.450170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:15.070 Running I/O for 1 seconds... 
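Above, bdev_io_wait.sh launches four bdevperf instances in parallel, one per workload, each pinned to its own core and fed the same bdev_nvme_attach_controller parameters over the /dev/fd/63 JSON. A sketch of that launch pattern, assuming gen_nvmf_target_json is supplied via process substitution (only the resulting /dev/fd/63 path is visible in the log) and using the PIDs recorded in this run:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!   # 65853
  "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!    # 65855
  "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!   # 65857
  "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!   # 65861
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"         # the script waits on each before reading results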
00:08:15.638 00:08:15.638 Latency(us) 00:08:15.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.638 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:15.638 Nvme1n1 : 1.00 174930.68 683.32 0.00 0.00 728.99 364.92 893.67 00:08:15.638 =================================================================================================================== 00:08:15.638 Total : 174930.68 683.32 0.00 0.00 728.99 364.92 893.67 00:08:15.638 00:08:15.638 Latency(us) 00:08:15.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.638 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:15.638 Nvme1n1 : 1.01 9061.61 35.40 0.00 0.00 14061.04 8043.05 20137.43 00:08:15.638 =================================================================================================================== 00:08:15.638 Total : 9061.61 35.40 0.00 0.00 14061.04 8043.05 20137.43 00:08:15.898 00:08:15.898 Latency(us) 00:08:15.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.898 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:15.898 Nvme1n1 : 1.02 4963.79 19.39 0.00 0.00 25523.83 12332.68 36223.53 00:08:15.898 =================================================================================================================== 00:08:15.898 Total : 4963.79 19.39 0.00 0.00 25523.83 12332.68 36223.53 00:08:15.898 00:08:15.898 Latency(us) 00:08:15.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.898 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:15.898 Nvme1n1 : 1.01 6575.15 25.68 0.00 0.00 19366.26 1727.77 27286.81 00:08:15.898 =================================================================================================================== 00:08:15.898 Total : 6575.15 25.68 0.00 0.00 19366.26 1727.77 27286.81 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 65855 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 65857 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65861 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
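With the per-workload latency summaries printed, the teardown visible above begins: the subsystem is deleted over RPC, the EXIT trap is cleared, and nvmftestfini unloads the NVMe transport modules. A condensed sketch of those steps; the loop body and its break condition are assumed, since only the first iteration is spelled out in the xtrace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  trap - SIGINT SIGTERM EXIT
  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break       # assumed loop body; the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines that follow are its output
  done
  modprobe -v -r nvme-fabrics
  set -e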
00:08:16.157 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.157 rmmod nvme_tcp 00:08:16.157 rmmod nvme_fabrics 00:08:16.157 rmmod nvme_keyring 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 65818 ']' 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 65818 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 65818 ']' 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 65818 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65818 00:08:16.416 killing process with pid 65818 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65818' 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 65818 00:08:16.416 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 65818 00:08:16.675 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.675 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.675 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.675 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.675 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.675 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.675 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.675 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.675 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:16.675 00:08:16.676 real 0m4.195s 00:08:16.676 user 0m18.301s 00:08:16.676 sys 0m2.223s 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.676 ************************************ 00:08:16.676 END TEST nvmf_bdev_io_wait 
00:08:16.676 ************************************ 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.676 ************************************ 00:08:16.676 START TEST nvmf_queue_depth 00:08:16.676 ************************************ 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:16.676 * Looking for test storage... 00:08:16.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 
-- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:16.676 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:16.936 Cannot find device "nvmf_tgt_br" 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:16.936 Cannot find device "nvmf_tgt_br2" 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:16.936 Cannot find device "nvmf_tgt_br" 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:16.936 Cannot find device "nvmf_tgt_br2" 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:16.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:16.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:16.936 10:46:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:16.936 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:17.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:08:17.203 00:08:17.203 --- 10.0.0.2 ping statistics --- 00:08:17.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.203 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:17.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:17.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:17.203 00:08:17.203 --- 10.0.0.3 ping statistics --- 00:08:17.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.203 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:17.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:17.203 00:08:17.203 --- 10.0.0.1 ping statistics --- 00:08:17.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.203 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66093 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66093 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 66093 ']' 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.203 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:17.203 [2024-07-25 10:46:46.795966] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
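Everything from the modprobe nvme-tcp step above onward is the nvmfappstart flow for this test: the target binary is launched inside the nvmf_tgt_ns_spdk namespace with core mask 0x2, and the harness blocks until the RPC socket at /var/tmp/spdk.sock answers. A rough stand-in for that launch-and-wait step, using the paths printed in this log and a simple rpc_get_methods poll in place of the real waitforlisten helper (which retries up to the max_retries=100 seen above):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done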
00:08:17.204 [2024-07-25 10:46:46.796054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.204 [2024-07-25 10:46:46.936393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.462 [2024-07-25 10:46:47.043959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.462 [2024-07-25 10:46:47.044018] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.462 [2024-07-25 10:46:47.044030] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.462 [2024-07-25 10:46:47.044042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.462 [2024-07-25 10:46:47.044053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.462 [2024-07-25 10:46:47.044097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.462 [2024-07-25 10:46:47.097087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.029 [2024-07-25 10:46:47.752607] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.029 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.288 Malloc0 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.288 [2024-07-25 10:46:47.817403] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66125 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:18.288 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:18.289 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66125 /var/tmp/bdevperf.sock 00:08:18.289 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 66125 ']' 00:08:18.289 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:18.289 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.289 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:18.289 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.289 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.289 [2024-07-25 10:46:47.878184] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
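The rpc_cmd wrappers above all talk to that target over /var/tmp/spdk.sock, so the whole queue_depth provisioning step boils down to five RPCs. Spelled out directly with rpc.py (names, sizes, and addresses exactly as they appear in this log; the default RPC socket is assumed):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After that, bdevperf (started above with -z -r /var/tmp/bdevperf.sock) only needs a bdev_nvme_attach_controller RPC on its own socket before perform_tests can drive the 1024-deep verify workload.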
00:08:18.289 [2024-07-25 10:46:47.878795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66125 ] 00:08:18.289 [2024-07-25 10:46:48.017551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.547 [2024-07-25 10:46:48.145902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.547 [2024-07-25 10:46:48.220899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.114 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.114 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:19.114 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:19.114 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.114 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.373 NVMe0n1 00:08:19.373 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.373 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:19.373 Running I/O for 10 seconds... 00:08:31.595 00:08:31.595 Latency(us) 00:08:31.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.595 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:31.595 Verification LBA range: start 0x0 length 0x4000 00:08:31.595 NVMe0n1 : 10.09 7733.69 30.21 0.00 0.00 131818.28 24069.59 98184.84 00:08:31.595 =================================================================================================================== 00:08:31.595 Total : 7733.69 30.21 0.00 0.00 131818.28 24069.59 98184.84 00:08:31.595 0 00:08:31.595 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66125 00:08:31.595 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 66125 ']' 00:08:31.595 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 66125 00:08:31.595 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:31.595 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.595 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66125 00:08:31.595 killing process with pid 66125 00:08:31.595 Received shutdown signal, test time was about 10.000000 seconds 00:08:31.595 00:08:31.595 Latency(us) 00:08:31.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.595 =================================================================================================================== 00:08:31.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:31.595 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66125' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 66125 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 66125 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.596 rmmod nvme_tcp 00:08:31.596 rmmod nvme_fabrics 00:08:31.596 rmmod nvme_keyring 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66093 ']' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66093 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 66093 ']' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 66093 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66093 00:08:31.596 killing process with pid 66093 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66093' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 66093 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 66093 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.596 10:46:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:31.596 00:08:31.596 real 0m13.624s 00:08:31.596 user 0m23.422s 00:08:31.596 sys 0m2.374s 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.596 ************************************ 00:08:31.596 END TEST nvmf_queue_depth 00:08:31.596 ************************************ 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.596 ************************************ 00:08:31.596 START TEST nvmf_target_multipath 00:08:31.596 ************************************ 00:08:31.596 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:31.596 * Looking for test storage... 
00:08:31.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.596 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.597 10:47:00 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:31.597 Cannot find device "nvmf_tgt_br" 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.597 Cannot find device "nvmf_tgt_br2" 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:31.597 Cannot find device "nvmf_tgt_br" 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:31.597 Cannot find device "nvmf_tgt_br2" 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:31.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:08:31.597 00:08:31.597 --- 10.0.0.2 ping statistics --- 00:08:31.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.597 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:31.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:31.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:31.597 00:08:31.597 --- 10.0.0.3 ping statistics --- 00:08:31.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.597 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:31.597 00:08:31.597 --- 10.0.0.1 ping statistics --- 00:08:31.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.597 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:31.597 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66446 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
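The "Cannot find device" and "Cannot open network namespace" messages above are only the harness tearing down a topology that does not exist yet; the ip/iptables commands that follow rebuild it, and the three pings confirm both directions work before the target is started. A condensed, single-path sketch of what nvmf_veth_init sets up (run as root; the second target interface, nvmf_tgt_if2 with 10.0.0.3, is added the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2        # host namespace reaches the target namespace over nvmf_br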
00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66446 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 66446 ']' 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.598 [2024-07-25 10:47:00.495897] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:31.598 [2024-07-25 10:47:00.496024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.598 [2024-07-25 10:47:00.639310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.598 [2024-07-25 10:47:00.779623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.598 [2024-07-25 10:47:00.779924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.598 [2024-07-25 10:47:00.780087] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.598 [2024-07-25 10:47:00.780363] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.598 [2024-07-25 10:47:00.780406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
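The two notices above give the knobs for inspecting the 0xFFFF tracepoint mask while the target runs; both commands below come straight from those hints, with spdk_trace assumed to be on PATH (it lives under build/bin in an SPDK build tree):

  spdk_trace -s nvmf -i 0          # snapshot live events for app 'nvmf', shm id 0
  cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw trace file for offline analysis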
00:08:31.598 [2024-07-25 10:47:00.780619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.598 [2024-07-25 10:47:00.780736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.598 [2024-07-25 10:47:00.780820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.598 [2024-07-25 10:47:00.780830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.598 [2024-07-25 10:47:00.858428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.598 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:31.598 [2024-07-25 10:47:01.254432] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.598 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:32.165 Malloc0 00:08:32.165 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:32.423 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.682 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.940 [2024-07-25 10:47:02.430671] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.940 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:33.199 [2024-07-25 10:47:02.707049] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:33.199 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid=bb4b8bd3-cfb4-4368-bf29-91254747069c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:33.199 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid=bb4b8bd3-cfb4-4368-bf29-91254747069c -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.3 -s 4420 -g -G 00:08:33.458 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:33.458 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:33.458 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:33.458 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:33.458 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:35.361 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:35.361 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:35.361 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
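Condensed, the preceding trace builds the two-listener multipath target and attaches the host to both paths. The sketch below keeps only the RPC and nvme-cli calls that actually appear in the log, with $NVME_HOSTNQN and $NVME_HOSTID standing in for the uuid-based values shown above and the serial wait simplified from the waitforserial loop:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# One connect per path; the -g -G flags are carried over verbatim from the trace.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# Wait for the namespace to show up, then enumerate both controller paths (nvme0c0n1, nvme0c1n1).
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
paths=(/sys/class/nvme-subsystem/nvme-subsys0/nvme*/nvme*c*)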
00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66534 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:35.361 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:35.361 [global] 00:08:35.361 thread=1 00:08:35.361 invalidate=1 00:08:35.361 rw=randrw 00:08:35.361 time_based=1 00:08:35.361 runtime=6 00:08:35.361 ioengine=libaio 00:08:35.361 direct=1 00:08:35.361 bs=4096 00:08:35.361 iodepth=128 00:08:35.361 norandommap=0 00:08:35.361 numjobs=1 00:08:35.361 00:08:35.361 verify_dump=1 00:08:35.361 verify_backlog=512 00:08:35.361 verify_state_save=0 00:08:35.361 do_verify=1 00:08:35.361 verify=crc32c-intel 00:08:35.361 [job0] 00:08:35.361 filename=/dev/nvme0n1 00:08:35.361 Could not set queue depth (nvme0n1) 00:08:35.620 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:35.620 fio-3.35 00:08:35.620 Starting 1 thread 00:08:36.555 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:36.555 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
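check_ana_state, traced above for both paths and again below, simply reads the ANA state the kernel exposes under /sys/block/<path>/ana_state and compares it with the expected string; the trace only ever shows the fast path where the state already matches. A reconstruction follows, where the sleep/retry handling is an assumption implied by timeout=20 rather than something visible in the log:

# Reconstructed from the traced helper; the retry loop is assumed, not shown in the trace.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Poll until the sysfs node exists and reports the expected ANA state.
    while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
        sleep 1
        (( timeout-- == 0 )) && return 1   # give up after roughly 20 seconds
    done
}

check_ana_state nvme0c0n1 inaccessible     # matches the state just set via nvmf_subsystem_listener_set_ana_state
check_ana_state nvme0c1n1 non-optimized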
00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:36.813 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:37.071 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:37.330 10:47:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66534 00:08:42.598 00:08:42.598 job0: (groupid=0, jobs=1): err= 0: pid=66555: Thu Jul 25 10:47:11 2024 00:08:42.598 read: IOPS=9757, BW=38.1MiB/s (40.0MB/s)(229MiB/6003msec) 00:08:42.598 slat (usec): min=2, max=10046, avg=60.60, stdev=254.78 00:08:42.598 clat (usec): min=1225, max=21512, avg=9050.17, stdev=1863.56 00:08:42.598 lat (usec): min=1828, max=21523, avg=9110.76, stdev=1873.37 00:08:42.598 clat percentiles (usec): 00:08:42.598 | 1.00th=[ 4686], 5.00th=[ 6652], 10.00th=[ 7439], 20.00th=[ 7963], 00:08:42.598 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:08:42.598 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[11600], 95.00th=[12780], 00:08:42.598 | 99.00th=[14746], 99.50th=[15926], 99.90th=[20055], 99.95th=[21365], 00:08:42.598 | 99.99th=[21365] 00:08:42.598 bw ( KiB/s): min= 4888, max=25516, per=51.19%, avg=19978.55, stdev=6361.05, samples=11 00:08:42.598 iops : min= 1222, max= 6379, avg=4994.64, stdev=1590.26, samples=11 00:08:42.598 write: IOPS=5758, BW=22.5MiB/s (23.6MB/s)(118MiB/5243msec); 0 zone resets 00:08:42.598 slat (usec): min=4, max=7684, avg=70.36, stdev=174.71 00:08:42.598 clat (usec): min=1387, max=17027, avg=7661.21, stdev=1496.05 00:08:42.598 lat (usec): min=1422, max=17068, avg=7731.56, stdev=1501.73 00:08:42.598 clat percentiles (usec): 00:08:42.598 | 1.00th=[ 3523], 5.00th=[ 4621], 10.00th=[ 5800], 20.00th=[ 6980], 00:08:42.598 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:08:42.598 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 9503], 00:08:42.598 | 99.00th=[12256], 99.50th=[13304], 99.90th=[15533], 99.95th=[16188], 00:08:42.598 | 99.99th=[16909] 00:08:42.598 bw ( KiB/s): min= 5136, max=26256, per=87.00%, avg=20039.82, stdev=6278.38, samples=11 00:08:42.598 iops : min= 1284, max= 6564, avg=5009.91, stdev=1569.56, samples=11 00:08:42.598 lat (msec) : 2=0.03%, 4=1.08%, 10=84.81%, 20=14.00%, 50=0.08% 00:08:42.598 cpu : usr=5.55%, sys=21.24%, ctx=5261, majf=0, minf=96 00:08:42.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:42.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:42.598 issued rwts: total=58574,30192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:42.598 00:08:42.598 Run status group 0 (all jobs): 00:08:42.598 READ: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=229MiB (240MB), run=6003-6003msec 00:08:42.598 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=118MiB (124MB), run=5243-5243msec 00:08:42.598 00:08:42.598 Disk stats (read/write): 00:08:42.598 nvme0n1: ios=57638/29680, merge=0/0, ticks=499368/212714, in_queue=712082, util=98.73% 00:08:42.598 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:42.598 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:42.598 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:42.598 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:42.598 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:42.598 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:42.598 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:42.598 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:42.599 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:42.599 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:42.599 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:42.599 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:42.599 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:42.599 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:42.599 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:42.599 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:42.599 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66629 00:08:42.599 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:42.599 [global] 00:08:42.599 thread=1 00:08:42.599 invalidate=1 00:08:42.599 rw=randrw 00:08:42.599 time_based=1 00:08:42.599 runtime=6 00:08:42.599 ioengine=libaio 00:08:42.599 direct=1 00:08:42.599 bs=4096 00:08:42.599 iodepth=128 00:08:42.599 norandommap=0 00:08:42.599 numjobs=1 00:08:42.599 00:08:42.599 verify_dump=1 00:08:42.599 verify_backlog=512 00:08:42.599 verify_state_save=0 00:08:42.599 do_verify=1 00:08:42.599 verify=crc32c-intel 00:08:42.599 [job0] 00:08:42.599 filename=/dev/nvme0n1 00:08:42.599 Could not set queue depth (nvme0n1) 00:08:42.599 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:42.599 fio-3.35 00:08:42.599 Starting 1 thread 00:08:43.165 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:43.423 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:43.681 
10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:43.681 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:43.938 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:44.204 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66629 00:08:49.485 00:08:49.485 job0: (groupid=0, jobs=1): err= 0: pid=66654: Thu Jul 25 10:47:18 2024 00:08:49.485 read: IOPS=10.2k, BW=39.8MiB/s (41.8MB/s)(239MiB/6007msec) 00:08:49.485 slat (usec): min=6, max=6806, avg=50.10, stdev=214.83 00:08:49.485 clat (usec): min=510, max=18901, avg=8702.74, stdev=2021.56 00:08:49.485 lat (usec): min=525, max=18911, avg=8752.84, stdev=2029.02 00:08:49.485 clat percentiles (usec): 00:08:49.485 | 1.00th=[ 3556], 5.00th=[ 5080], 10.00th=[ 6194], 20.00th=[ 7635], 00:08:49.485 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:08:49.485 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10945], 95.00th=[12649], 00:08:49.485 | 99.00th=[14353], 99.50th=[15139], 99.90th=[16909], 99.95th=[17171], 00:08:49.485 | 99.99th=[18220] 00:08:49.485 bw ( KiB/s): min= 7640, max=30323, per=51.30%, avg=20919.55, stdev=6836.11, samples=11 00:08:49.485 iops : min= 1910, max= 7580, avg=5229.82, stdev=1708.92, samples=11 00:08:49.485 write: IOPS=5923, BW=23.1MiB/s (24.3MB/s)(123MiB/5325msec); 0 zone resets 00:08:49.485 slat (usec): min=12, max=1946, avg=59.01, stdev=146.63 00:08:49.485 clat (usec): min=1766, max=16002, avg=7286.09, stdev=1718.78 00:08:49.485 lat (usec): min=1812, max=16027, avg=7345.10, stdev=1729.55 00:08:49.485 clat percentiles (usec): 00:08:49.485 | 1.00th=[ 3195], 5.00th=[ 3949], 10.00th=[ 4555], 20.00th=[ 5800], 00:08:49.485 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 7963], 00:08:49.485 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 9241], 00:08:49.485 | 99.00th=[11863], 99.50th=[12649], 99.90th=[14091], 99.95th=[14484], 00:08:49.485 | 99.99th=[15139] 00:08:49.485 bw ( KiB/s): min= 8088, max=31473, per=88.61%, avg=20996.45, stdev=6749.02, samples=11 00:08:49.485 iops : min= 2022, max= 7868, avg=5249.09, stdev=1687.22, samples=11 00:08:49.485 lat (usec) : 750=0.01%, 1000=0.02% 00:08:49.485 lat (msec) : 2=0.17%, 4=2.73%, 10=86.26%, 20=10.81% 00:08:49.485 cpu : usr=5.76%, sys=21.28%, ctx=5261, majf=0, minf=84 00:08:49.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:49.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:49.485 issued rwts: total=61233,31542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.485 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:08:49.485 00:08:49.485 Run status group 0 (all jobs): 00:08:49.485 READ: bw=39.8MiB/s (41.8MB/s), 39.8MiB/s-39.8MiB/s (41.8MB/s-41.8MB/s), io=239MiB (251MB), run=6007-6007msec 00:08:49.485 WRITE: bw=23.1MiB/s (24.3MB/s), 23.1MiB/s-23.1MiB/s (24.3MB/s-24.3MB/s), io=123MiB (129MB), run=5325-5325msec 00:08:49.485 00:08:49.485 Disk stats (read/write): 00:08:49.485 nvme0n1: ios=60405/31028, merge=0/0, ticks=504280/212450, in_queue=716730, util=98.71% 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:49.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.485 rmmod nvme_tcp 00:08:49.485 rmmod nvme_fabrics 00:08:49.485 rmmod nvme_keyring 00:08:49.485 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 66446 ']' 
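The teardown traced above mirrors the setup: disconnect the host, delete the subsystem, remove the fio verify-state files, and unload the NVMe/TCP modules. Condensed to the commands that appear in the log:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # drops both controllers of the multipath subsystem
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state
modprobe -v -r nvme-tcp        # the rmmod lines above show this also pulls out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics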
00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66446 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 66446 ']' 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 66446 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66446 00:08:49.486 killing process with pid 66446 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66446' 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 66446 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 66446 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.486 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:49.486 ************************************ 00:08:49.486 END TEST nvmf_target_multipath 00:08:49.486 ************************************ 00:08:49.486 00:08:49.486 real 0m19.051s 00:08:49.486 user 1m11.952s 00:08:49.486 sys 0m8.637s 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.486 
************************************ 00:08:49.486 START TEST nvmf_zcopy 00:08:49.486 ************************************ 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:49.486 * Looking for test storage... 00:08:49.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:49.486 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:49.487 Cannot find device "nvmf_tgt_br" 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.487 Cannot find device "nvmf_tgt_br2" 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:49.487 Cannot find device "nvmf_tgt_br" 00:08:49.487 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:49.746 Cannot find device "nvmf_tgt_br2" 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:49.746 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:50.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:08:50.005 00:08:50.005 --- 10.0.0.2 ping statistics --- 00:08:50.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.005 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:50.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:08:50.005 00:08:50.005 --- 10.0.0.3 ping statistics --- 00:08:50.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.005 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:50.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:50.005 00:08:50.005 --- 10.0.0.1 ping statistics --- 00:08:50.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.005 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=66903 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 66903 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 66903 ']' 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.005 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.005 [2024-07-25 10:47:19.574307] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
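The pings above verify the veth topology that nvmf_veth_init built a few lines earlier: the initiator interface stays in the root namespace with 10.0.0.1, the two target interfaces live inside nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3, and everything is joined by the nvmf_br bridge. Condensed to the commands visible in the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT      # let the host reach listener port 4420
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1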
00:08:50.005 [2024-07-25 10:47:19.574393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.005 [2024-07-25 10:47:19.713791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.265 [2024-07-25 10:47:19.842467] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.265 [2024-07-25 10:47:19.842545] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.265 [2024-07-25 10:47:19.842559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.265 [2024-07-25 10:47:19.842570] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.265 [2024-07-25 10:47:19.842580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.265 [2024-07-25 10:47:19.842625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.265 [2024-07-25 10:47:19.904029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:51.200 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.200 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:51.200 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.200 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:51.200 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.200 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.201 [2024-07-25 10:47:20.663977] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.201 [2024-07-25 10:47:20.680314] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.201 malloc0 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:51.201 { 00:08:51.201 "params": { 00:08:51.201 "name": "Nvme$subsystem", 00:08:51.201 "trtype": "$TEST_TRANSPORT", 00:08:51.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.201 "adrfam": "ipv4", 00:08:51.201 "trsvcid": "$NVMF_PORT", 00:08:51.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.201 "hdgst": ${hdgst:-false}, 00:08:51.201 "ddgst": ${ddgst:-false} 00:08:51.201 }, 00:08:51.201 "method": "bdev_nvme_attach_controller" 00:08:51.201 } 00:08:51.201 EOF 00:08:51.201 )") 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
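The rpc_cmd calls traced in the two blocks above configure the target end to end before any I/O is issued. Written out as plain rpc.py invocations (a sketch: rpc_cmd in these scripts is essentially a wrapper around rpc.py talking to /var/tmp/spdk.sock, and the "rpc" helper name below is ours):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
# TCP transport with zero-copy enabled; -o and -c 0 are the TCP tuning flags this test always passes
rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem cnode1: allow any host (-a), fixed serial number, at most 10 namespaces (-m 10)
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1 of cnode1
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1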
00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:51.201 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:51.201 "params": { 00:08:51.201 "name": "Nvme1", 00:08:51.201 "trtype": "tcp", 00:08:51.201 "traddr": "10.0.0.2", 00:08:51.201 "adrfam": "ipv4", 00:08:51.201 "trsvcid": "4420", 00:08:51.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.201 "hdgst": false, 00:08:51.201 "ddgst": false 00:08:51.201 }, 00:08:51.201 "method": "bdev_nvme_attach_controller" 00:08:51.201 }' 00:08:51.201 [2024-07-25 10:47:20.768309] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:51.201 [2024-07-25 10:47:20.768372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66941 ] 00:08:51.201 [2024-07-25 10:47:20.905038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.460 [2024-07-25 10:47:21.053566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.460 [2024-07-25 10:47:21.137832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:51.718 Running I/O for 10 seconds... 00:09:01.694 00:09:01.694 Latency(us) 00:09:01.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.694 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:01.694 Verification LBA range: start 0x0 length 0x1000 00:09:01.694 Nvme1n1 : 10.02 5906.63 46.15 0.00 0.00 21605.23 2695.91 32648.84 00:09:01.694 =================================================================================================================== 00:09:01.694 Total : 5906.63 46.15 0.00 0.00 21605.23 2695.91 32648.84 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67057 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:01.953 { 00:09:01.953 "params": { 00:09:01.953 "name": "Nvme$subsystem", 00:09:01.953 "trtype": "$TEST_TRANSPORT", 00:09:01.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.953 "adrfam": "ipv4", 00:09:01.953 "trsvcid": "$NVMF_PORT", 00:09:01.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.953 "hdgst": ${hdgst:-false}, 00:09:01.953 "ddgst": ${ddgst:-false} 00:09:01.953 }, 00:09:01.953 "method": "bdev_nvme_attach_controller" 00:09:01.953 } 00:09:01.953 
EOF 00:09:01.953 )") 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:01.953 [2024-07-25 10:47:31.511194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.511251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:01.953 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:01.953 "params": { 00:09:01.953 "name": "Nvme1", 00:09:01.953 "trtype": "tcp", 00:09:01.953 "traddr": "10.0.0.2", 00:09:01.953 "adrfam": "ipv4", 00:09:01.953 "trsvcid": "4420", 00:09:01.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.953 "hdgst": false, 00:09:01.953 "ddgst": false 00:09:01.953 }, 00:09:01.953 "method": "bdev_nvme_attach_controller" 00:09:01.953 }' 00:09:01.953 [2024-07-25 10:47:31.523123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.523148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.535126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.535151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.547126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.547149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.559139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.559162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.560941] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
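To summarize the first workload above: gen_nvmf_target_json expands to a single bdev_nvme_attach_controller entry (printed verbatim in the trace) that points bdevperf at nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and the 10-second verify pass completes cleanly at 5906.63 IOPS (46.15 MiB/s) with an average latency of about 21.6 ms (min 2.7 ms, max 32.6 ms). A sketch of what that step amounts to, with the generated JSON written out as a here-doc (the params block is taken from the log; the outer subsystems/bdev wrapper is our assumption about what gen_nvmf_target_json emits):

gen_config() {
  cat <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false },
    "method": "bdev_nvme_attach_controller" } ] } ] }
EOF
}
# 10 s verify run at queue depth 128 with 8 KiB I/O; --json /dev/fd/62 in the trace
# is the same idea as the process substitution used here
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_config) -t 10 -q 128 -w verify -o 8192

The second bdevperf starting here (perfpid 67057) attaches to the same subsystem with the same generated JSON, but runs a 5-second 50/50 random read/write job instead (-t 5 -q 128 -w randrw -M 50 -o 8192) while the test keeps poking the target over RPC.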
00:09:01.953 [2024-07-25 10:47:31.561022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67057 ] 00:09:01.953 [2024-07-25 10:47:31.571138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.571161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.583137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.583160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.595139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.595162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.607145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.607167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.619169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.619192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.631150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.631173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.643175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.643199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.655156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.655178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.667158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.667180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.953 [2024-07-25 10:47:31.679164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.953 [2024-07-25 10:47:31.679186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.691167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.691189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.698264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.212 [2024-07-25 10:47:31.703166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.703192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.715167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.715188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.727173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.727195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.739176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.739199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.751179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.751202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.763180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.763212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.775187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.775208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.787190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.787212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.799193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.799215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.803186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.212 [2024-07-25 10:47:31.811197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.811219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.823227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.823257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.835216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.835239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.847222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.847244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.859225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.859246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.865985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:02.212 [2024-07-25 10:47:31.871239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.871262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.883229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:02.212 [2024-07-25 10:47:31.883251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.212 [2024-07-25 10:47:31.895235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.212 [2024-07-25 10:47:31.895257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.213 [2024-07-25 10:47:31.907278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.213 [2024-07-25 10:47:31.907315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.213 [2024-07-25 10:47:31.919265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.213 [2024-07-25 10:47:31.919292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.213 [2024-07-25 10:47:31.931273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.213 [2024-07-25 10:47:31.931300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.213 [2024-07-25 10:47:31.943331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.213 [2024-07-25 10:47:31.943356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:31.955281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:31.955305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:31.967287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:31.967312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:31.979314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:31.979341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 Running I/O for 5 seconds... 
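About the wall of errors that follows: each pair of messages is one failed nvmf_subsystem_add_ns RPC. NSID 1 is already occupied by malloc0, so every attempt is rejected, but each attempt still pauses and resumes the subsystem (that is the nvmf_rpc_ns_paused frame in the trace) around and during the 5-second zero-copy randrw job, which is the behaviour this part of the test exercises. Judging by the steady cadence of roughly 12 ms between attempts, the driver is a tight RPC loop that runs for the life of the bdevperf process; a sketch of its assumed shape (illustration only, not the literal code in target/zcopy.sh):

while kill -0 "$perfpid" 2>/dev/null; do
  # Expected to fail with "Requested NSID 1 already in use"; the point is the
  # subsystem pause/resume each attempt triggers while zero-copy I/O is in flight.
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done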
00:09:02.536 [2024-07-25 10:47:31.991324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:31.991347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.007145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.007174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.022264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.022295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.033830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.033890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.050096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.050125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.066738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.066765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.083739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.083766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.100276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.100303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.116829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.116892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.134103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.134133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.148546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.148604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.165488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.165516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.181187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.181215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.199240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.199271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.213662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 
[2024-07-25 10:47:32.213692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.536 [2024-07-25 10:47:32.231175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.536 [2024-07-25 10:47:32.231205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.244646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.244691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.261363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.261390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.277265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.277294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.293395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.293422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.311454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.311481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.327414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.327440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.344539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.344567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.360705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.360732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.378124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.378152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.394521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.394562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.814 [2024-07-25 10:47:32.411333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.814 [2024-07-25 10:47:32.411361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.815 [2024-07-25 10:47:32.429110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.815 [2024-07-25 10:47:32.429140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.815 [2024-07-25 10:47:32.443439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.815 [2024-07-25 10:47:32.443467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.815 [2024-07-25 10:47:32.457780] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.815 [2024-07-25 10:47:32.457807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.815 [2024-07-25 10:47:32.474034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.815 [2024-07-25 10:47:32.474083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.815 [2024-07-25 10:47:32.488759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.815 [2024-07-25 10:47:32.488786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.815 [2024-07-25 10:47:32.503108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.815 [2024-07-25 10:47:32.503134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.815 [2024-07-25 10:47:32.519663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.815 [2024-07-25 10:47:32.519691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.815 [2024-07-25 10:47:32.535494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.815 [2024-07-25 10:47:32.535533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.554426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.554456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.568162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.568189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.584106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.584132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.599683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.599710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.616575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.616602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.633595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.633625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.650981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.651008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.667391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.667420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.683928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.683973] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.699962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.699989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.716305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.716333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.734627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.734657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.749271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.749315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.765801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.765830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.783102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.783131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.074 [2024-07-25 10:47:32.799320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.074 [2024-07-25 10:47:32.799365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.816923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.816958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.832830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.832894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.848613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.848643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.858171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.858200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.874386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.874415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.884456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.884486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.900572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.900599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.916659] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.916687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.933410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.933437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.950570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.950601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.967482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.967511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:32.983864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:32.983975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:33.000782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:33.000811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:33.019893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:33.019939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:33.035327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:33.035358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:33.051879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:33.051906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.334 [2024-07-25 10:47:33.068652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.334 [2024-07-25 10:47:33.068679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.085057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.085084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.103740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.103766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.117896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.117921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.134220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.134248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.151362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.151390] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.166221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.166248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.183134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.183162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.197021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.197046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.214332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.214362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.229441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.229468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.246713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.246740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.261986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.262012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.278395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.278437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.292818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.292845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.308282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.308310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.594 [2024-07-25 10:47:33.326271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.594 [2024-07-25 10:47:33.326299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.340741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.340768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.356715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.356743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.372340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.372366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.389785] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.389812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.406822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.406867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.421198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.421225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.436412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.436438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.448058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.448085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.463394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.463429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.481575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.481602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.495980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.496007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.512643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.512669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.528772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.528801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.546825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.546882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.562097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.562130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.573448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.573476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.853 [2024-07-25 10:47:33.590453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.853 [2024-07-25 10:47:33.590483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.605014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.605042] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.620438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.620482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.637733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.637760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.653695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.653721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.672654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.672681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.686553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.686580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.702992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.703019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.720115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.720143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.735067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.735094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.745167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.745209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.759438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.759466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.774897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.774935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.791266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.791293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.111 [2024-07-25 10:47:33.808758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.111 [2024-07-25 10:47:33.808785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.112 [2024-07-25 10:47:33.823365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.112 [2024-07-25 10:47:33.823392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.112 [2024-07-25 10:47:33.839763] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.112 [2024-07-25 10:47:33.839789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:33.856760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:33.856787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:33.874270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:33.874297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:33.889068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:33.889095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:33.904914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:33.904941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:33.921632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:33.921659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:33.938213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:33.938242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:33.954155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:33.954183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:33.965625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:33.965652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:33.981811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:33.981840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:33.998560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:33.998588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:34.014582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:34.014609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:34.031724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:34.031755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:34.046617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:34.046645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:34.061301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:34.061330] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:34.076673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:34.076702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-07-25 10:47:34.094594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-07-25 10:47:34.094624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.109393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.109421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.126064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.126091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.141775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.141802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.159768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.159795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.174330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.174360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.189626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.189654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.207229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.207258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.223841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.223877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.239008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.239035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.256291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.256325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.270803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.270831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.281778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.281804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.297445] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.297473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.315849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.315932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.329647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.329675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.346494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.346523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.629 [2024-07-25 10:47:34.361159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.629 [2024-07-25 10:47:34.361187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.376291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.376319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.393367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.393394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.407936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.407976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.425176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.425203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.441789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.441818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.456649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.456677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.472703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.472730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.489176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.489203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.505524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.505552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.521790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.521818] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.539445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.539473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.554506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.554549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.564021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.888 [2024-07-25 10:47:34.564049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.888 [2024-07-25 10:47:34.580421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.889 [2024-07-25 10:47:34.580450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.889 [2024-07-25 10:47:34.589918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.889 [2024-07-25 10:47:34.589944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.889 [2024-07-25 10:47:34.605311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.889 [2024-07-25 10:47:34.605339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.889 [2024-07-25 10:47:34.622720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.889 [2024-07-25 10:47:34.622749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.636603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.636630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.652520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.652549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.668930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.668964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.678023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.678073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.694351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.694382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.705494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.705522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.719985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.720011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.737493] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.737520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.752995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.753022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.771910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.771946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.785741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.785769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.801388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.801417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.817707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.817735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.834668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.834696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.851045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.851073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.869991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.870018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.148 [2024-07-25 10:47:34.885232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.148 [2024-07-25 10:47:34.885260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:34.901724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:34.901751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:34.919260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:34.919288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:34.935802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:34.935829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:34.952185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:34.952213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:34.969279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:34.969306] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:34.985421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:34.985448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:35.003287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:35.003315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:35.018338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:35.018398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:35.028750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:35.028795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:35.043619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:35.043650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:35.061209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:35.061251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:35.078384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:35.078411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:35.093924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:35.093950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:35.110830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:35.110873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.407 [2024-07-25 10:47:35.128718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.407 [2024-07-25 10:47:35.128745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.145138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.145177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.160944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.160971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.172371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.172399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.188726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.188775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.205705] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.205751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.221830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.221885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.239893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.239948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.254947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.254990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.270446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.270491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.288515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.288560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.304176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.304220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.322678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.322721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.336883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.666 [2024-07-25 10:47:35.336927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.666 [2024-07-25 10:47:35.353487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.667 [2024-07-25 10:47:35.353531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.667 [2024-07-25 10:47:35.368870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.667 [2024-07-25 10:47:35.368913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.667 [2024-07-25 10:47:35.381050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.667 [2024-07-25 10:47:35.381095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.667 [2024-07-25 10:47:35.397128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.667 [2024-07-25 10:47:35.397173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.925 [2024-07-25 10:47:35.412350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.925 [2024-07-25 10:47:35.412394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.925 [2024-07-25 10:47:35.429096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.925 [2024-07-25 10:47:35.429157] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.925 [2024-07-25 10:47:35.444806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.925 [2024-07-25 10:47:35.444851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.925 [2024-07-25 10:47:35.456094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.925 [2024-07-25 10:47:35.456138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.925 [2024-07-25 10:47:35.473533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.925 [2024-07-25 10:47:35.473577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.925 [2024-07-25 10:47:35.488348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.488405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.498275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.498307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.513717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.513761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.530019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.530087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.546355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.546423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.562304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.562335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.573274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.573318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.588784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.588829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.606228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.606260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.621624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.621669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.638572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.638615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.926 [2024-07-25 10:47:35.656442] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.926 [2024-07-25 10:47:35.656488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.184 [2024-07-25 10:47:35.671784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.184 [2024-07-25 10:47:35.671828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.184 [2024-07-25 10:47:35.686776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.184 [2024-07-25 10:47:35.686821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.184 [2024-07-25 10:47:35.701737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.184 [2024-07-25 10:47:35.701781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.184 [2024-07-25 10:47:35.718426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.184 [2024-07-25 10:47:35.718471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.184 [2024-07-25 10:47:35.734700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.734744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.751595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.751640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.766612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.766663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.781744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.781775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.792518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.792549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.806358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.806404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.817592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.817637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.833445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.833489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.850800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.850844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.867149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.867193] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.883367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.883412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.900166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.900210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.185 [2024-07-25 10:47:35.919556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.185 [2024-07-25 10:47:35.919601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.443 [2024-07-25 10:47:35.934059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.443 [2024-07-25 10:47:35.934110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:35.950275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:35.950305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:35.966108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:35.966140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:35.982581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:35.982625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:35.999884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:35.999956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.014843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.014914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.031504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.031548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.048202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.048243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.064973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.065018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.083483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.083549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.097947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.098011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.114327] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.114382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.130501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.130549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.146454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.146502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.156399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.156453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.444 [2024-07-25 10:47:36.171088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.444 [2024-07-25 10:47:36.171134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.181272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.181317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.195972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.196027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.210853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.210914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.227824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.227884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.242524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.242605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.259847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.259919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.276908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.276993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.292793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.292826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.304226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.304298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.320303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.320370] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.337420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.337488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.354799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.354854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.371781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.371852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.387306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.387370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.396317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.396373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.412362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.721 [2024-07-25 10:47:36.412427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.721 [2024-07-25 10:47:36.422194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.722 [2024-07-25 10:47:36.422245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.722 [2024-07-25 10:47:36.437325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.722 [2024-07-25 10:47:36.437370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.455396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.455451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.471362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.471426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.489052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.489116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.504546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.504595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.514339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.514406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.529562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.529628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.546020] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.546104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.564575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.564645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.579828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.579921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.597971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.598042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.612962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.613021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.629640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.629706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.007 [2024-07-25 10:47:36.646928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.007 [2024-07-25 10:47:36.646983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.008 [2024-07-25 10:47:36.663577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.008 [2024-07-25 10:47:36.663640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.008 [2024-07-25 10:47:36.678482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.008 [2024-07-25 10:47:36.678560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.008 [2024-07-25 10:47:36.694442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.008 [2024-07-25 10:47:36.694511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.008 [2024-07-25 10:47:36.712318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.008 [2024-07-25 10:47:36.712366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.008 [2024-07-25 10:47:36.727910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.008 [2024-07-25 10:47:36.728020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.744953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.745037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.762793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.762856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.777631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.777679] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.792978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.793041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.810805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.810900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.826118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.826180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.835980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.836026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.851047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.851092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.865545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.865590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.880707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.880769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.897309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.897354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.908756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.908801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.925581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.925624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.941332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.941377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.957620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.957663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.975398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.975442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 [2024-07-25 10:47:36.990177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:36.990222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.268 00:09:07.268 Latency(us) 00:09:07.268 Device Information : runtime(s) 
IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:07.268 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:07.268 Nvme1n1             :       5.01   12157.58      94.98       0.00       0.00   10515.20    4200.26   18707.55
00:09:07.268 ===================================================================================================================
00:09:07.268 Total               :            12157.58      94.98       0.00       0.00   10515.20    4200.26   18707.55
00:09:07.268 [2024-07-25 10:47:37.000472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.268 [2024-07-25 10:47:37.000501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.012495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.012521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.024467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.024506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.036462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.036485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.048486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.048525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.060483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.060506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.072486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.072508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.084488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.084510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.096493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.096516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.108495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.108519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.120500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.120537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.132518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.132551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.144519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.144544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused:
*ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.156520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.156543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.168517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.168539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.180521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.180542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.192528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.192552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.204530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.204553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.216530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.216552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 [2024-07-25 10:47:37.228533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.527 [2024-07-25 10:47:37.228566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.527 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67057) - No such process 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67057 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.527 delay0 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.527 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 
0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:07.786 [2024-07-25 10:47:37.427336] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:14.350 Initializing NVMe Controllers 00:09:14.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:14.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:14.350 Initialization complete. Launching workers. 00:09:14.350 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 79 00:09:14.350 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 366, failed to submit 33 00:09:14.350 success 232, unsuccess 134, failed 0 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:14.350 rmmod nvme_tcp 00:09:14.350 rmmod nvme_fabrics 00:09:14.350 rmmod nvme_keyring 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 66903 ']' 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 66903 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 66903 ']' 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 66903 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66903 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:14.350 killing process with pid 66903 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66903' 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 66903 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 66903 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:14.350 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.351 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.351 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.351 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:14.351 00:09:14.351 real 0m24.850s 00:09:14.351 user 0m39.890s 00:09:14.351 sys 0m7.554s 00:09:14.351 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.351 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.351 ************************************ 00:09:14.351 END TEST nvmf_zcopy 00:09:14.351 ************************************ 00:09:14.351 10:47:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:14.351 10:47:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.351 10:47:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.351 10:47:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.351 ************************************ 00:09:14.351 START TEST nvmf_nmic 00:09:14.351 ************************************ 00:09:14.351 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:14.351 * Looking for test storage... 
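For reference, the zcopy abort pass recorded above reduces to the command sequence below. This is a minimal sketch, not the test script itself: it assumes a running nvmf_tgt that already exposes nqn.2016-06.io.spdk:cnode1 over TCP at 10.0.0.2:4420 with a malloc0 bdev attached, uses scripts/rpc.py directly in place of the suite's rpc_cmd wrapper, and takes paths relative to the SPDK repo root.

    # Swap the subsystem's namespace for a high-latency delay bdev so in-flight I/O is slow enough to abort
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Drive the abort example with the same arguments zcopy.sh logs above (5 s randrw workload, queue depth 64)
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'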
00:09:14.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.351 10:47:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:14.351 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:14.611 Cannot find device "nvmf_tgt_br" 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.611 Cannot find device "nvmf_tgt_br2" 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:09:14.611 Cannot find device "nvmf_tgt_br" 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:14.611 Cannot find device "nvmf_tgt_br2" 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:14.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:14.611 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:14.612 10:47:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:14.612 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:14.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:09:14.871 00:09:14.871 --- 10.0.0.2 ping statistics --- 00:09:14.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.871 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:14.871 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:14.871 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:14.871 00:09:14.871 --- 10.0.0.3 ping statistics --- 00:09:14.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.871 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:09:14.871 00:09:14.871 --- 10.0.0.1 ping statistics --- 00:09:14.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.871 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67380 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67380 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 67380 ']' 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.871 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.871 [2024-07-25 10:47:44.482184] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:14.871 [2024-07-25 10:47:44.482309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.130 [2024-07-25 10:47:44.630845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.130 [2024-07-25 10:47:44.734821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.130 [2024-07-25 10:47:44.734928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.130 [2024-07-25 10:47:44.734940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.130 [2024-07-25 10:47:44.734949] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.130 [2024-07-25 10:47:44.734957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.130 [2024-07-25 10:47:44.735135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.130 [2024-07-25 10:47:44.735338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.130 [2024-07-25 10:47:44.735867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.130 [2024-07-25 10:47:44.735877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.130 [2024-07-25 10:47:44.790334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.072 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 [2024-07-25 10:47:45.487170] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 Malloc0 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.073 10:47:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 [2024-07-25 10:47:45.573123] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.073 test case1: single bdev can't be used in multiple subsystems 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 [2024-07-25 10:47:45.596931] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:16.073 [2024-07-25 10:47:45.596968] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:16.073 [2024-07-25 10:47:45.596980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.073 request: 00:09:16.073 { 00:09:16.073 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:16.073 "namespace": { 00:09:16.073 "bdev_name": "Malloc0", 00:09:16.073 "no_auto_visible": false 00:09:16.073 }, 00:09:16.073 "method": "nvmf_subsystem_add_ns", 00:09:16.073 "req_id": 1 00:09:16.073 } 00:09:16.073 Got JSON-RPC error response 00:09:16.073 response: 00:09:16.073 { 00:09:16.073 "code": -32602, 00:09:16.073 "message": "Invalid parameters" 00:09:16.073 } 00:09:16.073 Adding namespace failed - expected result. 00:09:16.073 test case2: host connect to nvmf target in multiple paths 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 [2024-07-25 10:47:45.609044] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid=bb4b8bd3-cfb4-4368-bf29-91254747069c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.073 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid=bb4b8bd3-cfb4-4368-bf29-91254747069c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:16.331 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.331 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:16.331 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.331 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:16.331 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:18.233 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:18.233 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.233 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:18.233 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:18.233 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.233 10:47:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:18.234 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:18.234 [global] 00:09:18.234 thread=1 00:09:18.234 invalidate=1 00:09:18.234 rw=write 00:09:18.234 time_based=1 00:09:18.234 runtime=1 00:09:18.234 ioengine=libaio 00:09:18.234 direct=1 00:09:18.234 bs=4096 00:09:18.234 iodepth=1 00:09:18.234 norandommap=0 00:09:18.234 numjobs=1 00:09:18.234 00:09:18.234 verify_dump=1 00:09:18.234 verify_backlog=512 00:09:18.234 verify_state_save=0 00:09:18.234 do_verify=1 00:09:18.234 verify=crc32c-intel 00:09:18.234 [job0] 00:09:18.234 filename=/dev/nvme0n1 00:09:18.234 Could not set queue depth (nvme0n1) 00:09:18.492 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.492 fio-3.35 00:09:18.492 Starting 1 thread 00:09:19.867 00:09:19.867 job0: (groupid=0, jobs=1): err= 0: pid=67477: Thu Jul 25 10:47:49 2024 00:09:19.867 read: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec) 00:09:19.867 slat (nsec): min=12217, max=37246, avg=14258.80, stdev=2354.61 00:09:19.867 clat (usec): min=145, max=786, avg=184.32, stdev=20.25 00:09:19.867 lat (usec): min=161, max=799, avg=198.58, stdev=20.39 00:09:19.867 clat percentiles (usec): 00:09:19.867 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:09:19.867 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:09:19.867 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 210], 00:09:19.867 | 99.00th=[ 231], 99.50th=[ 265], 99.90th=[ 318], 99.95th=[ 355], 00:09:19.867 | 99.99th=[ 791] 00:09:19.867 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:19.867 slat (usec): min=17, max=104, avg=20.56, stdev= 4.33 00:09:19.867 clat (usec): min=85, max=688, avg=111.81, stdev=16.39 00:09:19.867 lat (usec): min=107, max=708, avg=132.37, stdev=17.50 00:09:19.867 clat percentiles (usec): 00:09:19.867 | 1.00th=[ 92], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 102], 00:09:19.867 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 114], 00:09:19.867 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 133], 00:09:19.867 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 221], 99.95th=[ 322], 00:09:19.867 | 99.99th=[ 693] 00:09:19.867 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:19.867 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:19.867 lat (usec) : 100=7.17%, 250=92.50%, 500=0.30%, 750=0.02%, 1000=0.02% 00:09:19.867 cpu : usr=1.90%, sys=8.40%, ctx=6011, majf=0, minf=2 00:09:19.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.867 issued rwts: total=2939,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.867 00:09:19.867 Run status group 0 (all jobs): 00:09:19.867 READ: bw=11.5MiB/s (12.0MB/s), 11.5MiB/s-11.5MiB/s (12.0MB/s-12.0MB/s), io=11.5MiB (12.0MB), run=1001-1001msec 00:09:19.867 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:19.867 00:09:19.867 Disk stats (read/write): 00:09:19.867 nvme0n1: ios=2610/2886, merge=0/0, 
ticks=509/347, in_queue=856, util=91.48% 00:09:19.867 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:19.867 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.867 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.868 rmmod nvme_tcp 00:09:19.868 rmmod nvme_fabrics 00:09:19.868 rmmod nvme_keyring 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67380 ']' 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67380 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 67380 ']' 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 67380 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67380 00:09:19.868 killing process with pid 67380 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67380' 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@969 -- # kill 67380 00:09:19.868 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 67380 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:20.126 00:09:20.126 real 0m5.720s 00:09:20.126 user 0m18.315s 00:09:20.126 sys 0m2.100s 00:09:20.126 ************************************ 00:09:20.126 END TEST nvmf_nmic 00:09:20.126 ************************************ 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.126 10:47:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.126 ************************************ 00:09:20.126 START TEST nvmf_fio_target 00:09:20.126 ************************************ 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:20.127 * Looking for test storage... 
00:09:20.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:20.127 
10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:20.127 Cannot find device "nvmf_tgt_br" 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:20.127 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:20.386 Cannot find device "nvmf_tgt_br2" 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:20.386 Cannot find device "nvmf_tgt_br" 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:20.386 Cannot find device "nvmf_tgt_br2" 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:20.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:20.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:20.386 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:20.386 
10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:20.386 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:20.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:09:20.645 00:09:20.645 --- 10.0.0.2 ping statistics --- 00:09:20.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.645 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:20.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:20.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:09:20.645 00:09:20.645 --- 10.0.0.3 ping statistics --- 00:09:20.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.645 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:20.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:20.645 00:09:20.645 --- 10.0.0.1 ping statistics --- 00:09:20.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.645 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=67654 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 67654 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 67654 ']' 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.645 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.645 [2024-07-25 10:47:50.251556] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:20.645 [2024-07-25 10:47:50.251661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.903 [2024-07-25 10:47:50.391903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.903 [2024-07-25 10:47:50.547162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.903 [2024-07-25 10:47:50.547222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.903 [2024-07-25 10:47:50.547236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.903 [2024-07-25 10:47:50.547247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.903 [2024-07-25 10:47:50.547256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.903 [2024-07-25 10:47:50.547665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.903 [2024-07-25 10:47:50.547773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.903 [2024-07-25 10:47:50.547985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.903 [2024-07-25 10:47:50.547992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.903 [2024-07-25 10:47:50.620287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:21.838 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.838 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:21.838 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.838 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:21.838 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.838 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.838 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:21.838 [2024-07-25 10:47:51.547591] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.096 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.354 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:22.354 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.612 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:22.612 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.870 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:22.870 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.128 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:23.128 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:23.385 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.644 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:23.644 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.901 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:23.901 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.159 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:24.159 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:24.436 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:24.694 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:24.694 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.952 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:24.952 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:25.211 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.469 [2024-07-25 10:47:55.100189] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.469 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:25.728 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:25.988 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid=bb4b8bd3-cfb4-4368-bf29-91254747069c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.247 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:26.247 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:26.247 10:47:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.247 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:26.247 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:26.247 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:28.153 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:28.153 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:28.153 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.153 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:28.153 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.153 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:28.153 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:28.153 [global] 00:09:28.153 thread=1 00:09:28.153 invalidate=1 00:09:28.153 rw=write 00:09:28.153 time_based=1 00:09:28.153 runtime=1 00:09:28.153 ioengine=libaio 00:09:28.153 direct=1 00:09:28.153 bs=4096 00:09:28.153 iodepth=1 00:09:28.153 norandommap=0 00:09:28.153 numjobs=1 00:09:28.153 00:09:28.153 verify_dump=1 00:09:28.153 verify_backlog=512 00:09:28.153 verify_state_save=0 00:09:28.153 do_verify=1 00:09:28.153 verify=crc32c-intel 00:09:28.153 [job0] 00:09:28.153 filename=/dev/nvme0n1 00:09:28.153 [job1] 00:09:28.153 filename=/dev/nvme0n2 00:09:28.153 [job2] 00:09:28.153 filename=/dev/nvme0n3 00:09:28.153 [job3] 00:09:28.153 filename=/dev/nvme0n4 00:09:28.413 Could not set queue depth (nvme0n1) 00:09:28.413 Could not set queue depth (nvme0n2) 00:09:28.413 Could not set queue depth (nvme0n3) 00:09:28.413 Could not set queue depth (nvme0n4) 00:09:28.413 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.413 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.413 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.413 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.413 fio-3.35 00:09:28.413 Starting 4 threads 00:09:29.790 00:09:29.790 job0: (groupid=0, jobs=1): err= 0: pid=67839: Thu Jul 25 10:47:59 2024 00:09:29.790 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:29.790 slat (nsec): min=12402, max=48449, avg=15973.21, stdev=3054.91 00:09:29.790 clat (usec): min=141, max=2128, avg=198.20, stdev=49.08 00:09:29.790 lat (usec): min=155, max=2145, avg=214.18, stdev=49.28 00:09:29.790 clat percentiles (usec): 00:09:29.790 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 178], 00:09:29.790 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:09:29.790 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 239], 00:09:29.790 | 99.00th=[ 281], 99.50th=[ 334], 99.90th=[ 586], 99.95th=[ 725], 00:09:29.790 | 99.99th=[ 2114] 
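Condensing the xtrace above: everything these fio numbers run against is assembled with a short sequence of rpc.py calls — two plain malloc bdevs, plus a raid0 and a concat volume built from five more, all attached as namespaces of one subsystem behind a TCP listener on 10.0.0.2:4420. A minimal sketch of that sequence, using the same flags as the trace (error handling and the --hostnqn/--hostid arguments to nvme connect are left out here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport; -u 8192 sets the IO unit size (flags copied from the trace)
  for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done                    # Malloc0..Malloc6: 64 MB each, 512-byte blocks
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'                  # striped raid0 over Malloc2+Malloc3
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'   # concatenated concat0 over Malloc4..Malloc6
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for ns in Malloc0 Malloc1 raid0 concat0; do                                     # these become nvme0n1..n4 on the host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $ns
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The per-job results above and below are fio reporting against exactly those four namespaces (nvme0n1..nvme0n4).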
00:09:29.790 write: IOPS=2665, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:09:29.790 slat (usec): min=14, max=145, avg=22.85, stdev= 5.53 00:09:29.790 clat (usec): min=92, max=316, avg=142.90, stdev=23.47 00:09:29.790 lat (usec): min=111, max=435, avg=165.75, stdev=24.97 00:09:29.790 clat percentiles (usec): 00:09:29.790 | 1.00th=[ 101], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 124], 00:09:29.790 | 30.00th=[ 130], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 147], 00:09:29.790 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 174], 95.00th=[ 184], 00:09:29.790 | 99.00th=[ 212], 99.50th=[ 223], 99.90th=[ 273], 99.95th=[ 289], 00:09:29.790 | 99.99th=[ 318] 00:09:29.790 bw ( KiB/s): min=12288, max=12288, per=39.39%, avg=12288.00, stdev= 0.00, samples=1 00:09:29.790 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:29.790 lat (usec) : 100=0.42%, 250=98.05%, 500=1.43%, 750=0.08% 00:09:29.790 lat (msec) : 4=0.02% 00:09:29.790 cpu : usr=2.40%, sys=7.70%, ctx=5235, majf=0, minf=11 00:09:29.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.790 issued rwts: total=2560,2668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.790 job1: (groupid=0, jobs=1): err= 0: pid=67840: Thu Jul 25 10:47:59 2024 00:09:29.791 read: IOPS=1433, BW=5734KiB/s (5872kB/s)(5740KiB/1001msec) 00:09:29.791 slat (nsec): min=16410, max=80523, avg=23440.82, stdev=6725.34 00:09:29.791 clat (usec): min=184, max=2784, avg=349.90, stdev=108.94 00:09:29.791 lat (usec): min=215, max=2813, avg=373.34, stdev=110.94 00:09:29.791 clat percentiles (usec): 00:09:29.791 | 1.00th=[ 262], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:09:29.791 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 343], 00:09:29.791 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 429], 95.00th=[ 502], 00:09:29.791 | 99.00th=[ 627], 99.50th=[ 693], 99.90th=[ 2409], 99.95th=[ 2769], 00:09:29.791 | 99.99th=[ 2769] 00:09:29.791 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:29.791 slat (usec): min=23, max=127, avg=35.66, stdev= 9.55 00:09:29.791 clat (usec): min=109, max=612, avg=261.42, stdev=84.09 00:09:29.791 lat (usec): min=134, max=739, avg=297.08, stdev=89.79 00:09:29.791 clat percentiles (usec): 00:09:29.791 | 1.00th=[ 125], 5.00th=[ 143], 10.00th=[ 161], 20.00th=[ 204], 00:09:29.791 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 249], 60.00th=[ 260], 00:09:29.791 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 396], 95.00th=[ 441], 00:09:29.791 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 586], 99.95th=[ 611], 00:09:29.791 | 99.99th=[ 611] 00:09:29.791 bw ( KiB/s): min= 8024, max= 8024, per=25.72%, avg=8024.00, stdev= 0.00, samples=1 00:09:29.791 iops : min= 2006, max= 2006, avg=2006.00, stdev= 0.00, samples=1 00:09:29.791 lat (usec) : 250=26.76%, 500=70.08%, 750=3.00%, 1000=0.10% 00:09:29.791 lat (msec) : 4=0.07% 00:09:29.791 cpu : usr=2.50%, sys=6.30%, ctx=2971, majf=0, minf=13 00:09:29.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.791 issued rwts: total=1435,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.791 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:09:29.791 job2: (groupid=0, jobs=1): err= 0: pid=67841: Thu Jul 25 10:47:59 2024 00:09:29.791 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:29.791 slat (nsec): min=16713, max=84242, avg=26254.66, stdev=8271.74 00:09:29.791 clat (usec): min=189, max=876, avg=347.83, stdev=83.29 00:09:29.791 lat (usec): min=207, max=898, avg=374.09, stdev=86.94 00:09:29.791 clat percentiles (usec): 00:09:29.791 | 1.00th=[ 235], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 297], 00:09:29.791 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 343], 00:09:29.791 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 433], 95.00th=[ 553], 00:09:29.791 | 99.00th=[ 660], 99.50th=[ 701], 99.90th=[ 783], 99.95th=[ 873], 00:09:29.791 | 99.99th=[ 873] 00:09:29.791 write: IOPS=1552, BW=6210KiB/s (6359kB/s)(6216KiB/1001msec); 0 zone resets 00:09:29.791 slat (usec): min=19, max=146, avg=32.65, stdev= 6.83 00:09:29.791 clat (usec): min=114, max=477, avg=235.11, stdev=50.14 00:09:29.791 lat (usec): min=137, max=511, avg=267.76, stdev=51.32 00:09:29.791 clat percentiles (usec): 00:09:29.791 | 1.00th=[ 133], 5.00th=[ 149], 10.00th=[ 163], 20.00th=[ 192], 00:09:29.791 | 30.00th=[ 215], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 249], 00:09:29.791 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 318], 00:09:29.791 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 420], 99.95th=[ 478], 00:09:29.791 | 99.99th=[ 478] 00:09:29.791 bw ( KiB/s): min= 8192, max= 8192, per=26.26%, avg=8192.00, stdev= 0.00, samples=1 00:09:29.791 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:29.791 lat (usec) : 250=31.26%, 500=64.95%, 750=3.69%, 1000=0.10% 00:09:29.791 cpu : usr=2.70%, sys=6.60%, ctx=3090, majf=0, minf=3 00:09:29.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.791 issued rwts: total=1536,1554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.791 job3: (groupid=0, jobs=1): err= 0: pid=67842: Thu Jul 25 10:47:59 2024 00:09:29.791 read: IOPS=1795, BW=7181KiB/s (7353kB/s)(7188KiB/1001msec) 00:09:29.791 slat (nsec): min=12166, max=74534, avg=18512.85, stdev=6699.53 00:09:29.791 clat (usec): min=203, max=918, avg=272.86, stdev=35.42 00:09:29.791 lat (usec): min=220, max=956, avg=291.37, stdev=37.28 00:09:29.791 clat percentiles (usec): 00:09:29.791 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 247], 00:09:29.791 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:09:29.791 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 330], 00:09:29.791 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 502], 99.95th=[ 922], 00:09:29.791 | 99.99th=[ 922] 00:09:29.791 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:29.791 slat (usec): min=16, max=126, avg=26.84, stdev=11.04 00:09:29.791 clat (usec): min=139, max=1363, avg=201.95, stdev=37.04 00:09:29.791 lat (usec): min=164, max=1414, avg=228.79, stdev=40.97 00:09:29.791 clat percentiles (usec): 00:09:29.791 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 180], 00:09:29.791 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:09:29.791 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 251], 00:09:29.791 | 99.00th=[ 281], 99.50th=[ 293], 
99.90th=[ 338], 99.95th=[ 351], 00:09:29.791 | 99.99th=[ 1369] 00:09:29.791 bw ( KiB/s): min= 8192, max= 8192, per=26.26%, avg=8192.00, stdev= 0.00, samples=1 00:09:29.791 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:29.791 lat (usec) : 250=62.34%, 500=37.61%, 1000=0.03% 00:09:29.791 lat (msec) : 2=0.03% 00:09:29.791 cpu : usr=1.40%, sys=7.20%, ctx=3845, majf=0, minf=8 00:09:29.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.791 issued rwts: total=1797,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.791 00:09:29.791 Run status group 0 (all jobs): 00:09:29.791 READ: bw=28.6MiB/s (30.0MB/s), 5734KiB/s-9.99MiB/s (5872kB/s-10.5MB/s), io=28.6MiB (30.0MB), run=1001-1001msec 00:09:29.791 WRITE: bw=30.5MiB/s (31.9MB/s), 6138KiB/s-10.4MiB/s (6285kB/s-10.9MB/s), io=30.5MiB (32.0MB), run=1001-1001msec 00:09:29.791 00:09:29.791 Disk stats (read/write): 00:09:29.791 nvme0n1: ios=2098/2523, merge=0/0, ticks=440/381, in_queue=821, util=88.48% 00:09:29.791 nvme0n2: ios=1111/1536, merge=0/0, ticks=420/429, in_queue=849, util=88.88% 00:09:29.791 nvme0n3: ios=1177/1536, merge=0/0, ticks=433/375, in_queue=808, util=89.20% 00:09:29.791 nvme0n4: ios=1536/1790, merge=0/0, ticks=423/393, in_queue=816, util=89.83% 00:09:29.791 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:29.791 [global] 00:09:29.791 thread=1 00:09:29.791 invalidate=1 00:09:29.791 rw=randwrite 00:09:29.791 time_based=1 00:09:29.791 runtime=1 00:09:29.791 ioengine=libaio 00:09:29.791 direct=1 00:09:29.791 bs=4096 00:09:29.791 iodepth=1 00:09:29.791 norandommap=0 00:09:29.791 numjobs=1 00:09:29.791 00:09:29.791 verify_dump=1 00:09:29.791 verify_backlog=512 00:09:29.791 verify_state_save=0 00:09:29.791 do_verify=1 00:09:29.791 verify=crc32c-intel 00:09:29.791 [job0] 00:09:29.791 filename=/dev/nvme0n1 00:09:29.791 [job1] 00:09:29.791 filename=/dev/nvme0n2 00:09:29.791 [job2] 00:09:29.791 filename=/dev/nvme0n3 00:09:29.791 [job3] 00:09:29.791 filename=/dev/nvme0n4 00:09:29.791 Could not set queue depth (nvme0n1) 00:09:29.791 Could not set queue depth (nvme0n2) 00:09:29.791 Could not set queue depth (nvme0n3) 00:09:29.791 Could not set queue depth (nvme0n4) 00:09:29.791 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.791 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.791 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.791 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.791 fio-3.35 00:09:29.791 Starting 4 threads 00:09:31.167 00:09:31.167 job0: (groupid=0, jobs=1): err= 0: pid=67901: Thu Jul 25 10:48:00 2024 00:09:31.167 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:31.167 slat (nsec): min=11910, max=70250, avg=14200.92, stdev=2467.91 00:09:31.167 clat (usec): min=131, max=2377, avg=159.98, stdev=46.18 00:09:31.167 lat (usec): min=145, max=2393, avg=174.19, stdev=46.55 00:09:31.167 clat percentiles (usec): 00:09:31.167 | 1.00th=[ 139], 
5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:09:31.167 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:09:31.167 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:09:31.167 | 99.00th=[ 204], 99.50th=[ 260], 99.90th=[ 392], 99.95th=[ 1045], 00:09:31.167 | 99.99th=[ 2376] 00:09:31.167 write: IOPS=3353, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec); 0 zone resets 00:09:31.167 slat (usec): min=14, max=108, avg=20.38, stdev= 3.47 00:09:31.167 clat (usec): min=87, max=308, avg=115.02, stdev=12.72 00:09:31.167 lat (usec): min=106, max=340, avg=135.41, stdev=13.67 00:09:31.167 clat percentiles (usec): 00:09:31.167 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 105], 00:09:31.167 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 118], 00:09:31.167 | 70.00th=[ 121], 80.00th=[ 125], 90.00th=[ 130], 95.00th=[ 135], 00:09:31.167 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 225], 99.95th=[ 243], 00:09:31.167 | 99.99th=[ 310] 00:09:31.167 bw ( KiB/s): min=13416, max=13416, per=32.36%, avg=13416.00, stdev= 0.00, samples=1 00:09:31.167 iops : min= 3354, max= 3354, avg=3354.00, stdev= 0.00, samples=1 00:09:31.167 lat (usec) : 100=4.60%, 250=95.12%, 500=0.23%, 750=0.02% 00:09:31.167 lat (msec) : 2=0.02%, 4=0.02% 00:09:31.167 cpu : usr=2.50%, sys=8.50%, ctx=6434, majf=0, minf=11 00:09:31.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.167 issued rwts: total=3072,3357,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.168 job1: (groupid=0, jobs=1): err= 0: pid=67902: Thu Jul 25 10:48:00 2024 00:09:31.168 read: IOPS=2038, BW=8156KiB/s (8352kB/s)(8164KiB/1001msec) 00:09:31.168 slat (nsec): min=8997, max=55652, avg=14384.50, stdev=4831.59 00:09:31.168 clat (usec): min=182, max=1696, avg=254.80, stdev=36.63 00:09:31.168 lat (usec): min=197, max=1709, avg=269.19, stdev=37.30 00:09:31.168 clat percentiles (usec): 00:09:31.168 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:09:31.168 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:09:31.168 | 70.00th=[ 262], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:09:31.168 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 404], 99.95th=[ 603], 00:09:31.168 | 99.99th=[ 1696] 00:09:31.168 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:31.168 slat (usec): min=9, max=131, avg=19.14, stdev= 7.57 00:09:31.168 clat (usec): min=102, max=462, avg=197.64, stdev=18.85 00:09:31.168 lat (usec): min=138, max=489, avg=216.78, stdev=21.12 00:09:31.168 clat percentiles (usec): 00:09:31.168 | 1.00th=[ 161], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:09:31.168 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:09:31.168 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 227], 00:09:31.168 | 99.00th=[ 258], 99.50th=[ 277], 99.90th=[ 310], 99.95th=[ 326], 00:09:31.168 | 99.99th=[ 461] 00:09:31.168 bw ( KiB/s): min= 8192, max= 8192, per=19.76%, avg=8192.00, stdev= 0.00, samples=1 00:09:31.168 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:31.168 lat (usec) : 250=71.41%, 500=28.54%, 750=0.02% 00:09:31.168 lat (msec) : 2=0.02% 00:09:31.168 cpu : usr=1.70%, sys=5.90%, ctx=4098, majf=0, minf=9 00:09:31.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.168 issued rwts: total=2041,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.168 job2: (groupid=0, jobs=1): err= 0: pid=67903: Thu Jul 25 10:48:00 2024 00:09:31.168 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:31.168 slat (usec): min=13, max=1696, avg=19.44, stdev=33.52 00:09:31.168 clat (usec): min=3, max=3551, avg=185.43, stdev=77.21 00:09:31.168 lat (usec): min=163, max=3579, avg=204.87, stdev=83.37 00:09:31.168 clat percentiles (usec): 00:09:31.168 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 172], 00:09:31.168 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:09:31.168 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 204], 00:09:31.168 | 99.00th=[ 247], 99.50th=[ 424], 99.90th=[ 783], 99.95th=[ 1385], 00:09:31.168 | 99.99th=[ 3556] 00:09:31.168 write: IOPS=2918, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec); 0 zone resets 00:09:31.168 slat (usec): min=15, max=104, avg=24.88, stdev= 6.36 00:09:31.168 clat (usec): min=106, max=859, avg=134.01, stdev=22.10 00:09:31.168 lat (usec): min=130, max=882, avg=158.89, stdev=23.09 00:09:31.168 clat percentiles (usec): 00:09:31.168 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 125], 00:09:31.168 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:09:31.168 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 153], 00:09:31.168 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 545], 99.95th=[ 627], 00:09:31.168 | 99.99th=[ 857] 00:09:31.168 bw ( KiB/s): min=12288, max=12288, per=29.64%, avg=12288.00, stdev= 0.00, samples=1 00:09:31.168 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:31.168 lat (usec) : 4=0.02%, 250=99.40%, 500=0.35%, 750=0.16%, 1000=0.04% 00:09:31.168 lat (msec) : 2=0.02%, 4=0.02% 00:09:31.168 cpu : usr=2.50%, sys=9.60%, ctx=5481, majf=0, minf=11 00:09:31.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.168 issued rwts: total=2560,2921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.168 job3: (groupid=0, jobs=1): err= 0: pid=67904: Thu Jul 25 10:48:00 2024 00:09:31.168 read: IOPS=2036, BW=8148KiB/s (8343kB/s)(8156KiB/1001msec) 00:09:31.168 slat (nsec): min=9141, max=44952, avg=13272.81, stdev=2932.59 00:09:31.168 clat (usec): min=174, max=700, avg=255.47, stdev=20.65 00:09:31.168 lat (usec): min=195, max=714, avg=268.74, stdev=20.84 00:09:31.168 clat percentiles (usec): 00:09:31.168 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:09:31.168 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:09:31.168 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:09:31.168 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 379], 99.95th=[ 424], 00:09:31.168 | 99.99th=[ 701] 00:09:31.168 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:31.168 slat (usec): min=11, max=248, avg=21.58, stdev=10.63 00:09:31.168 clat (nsec): min=1686, max=1552.2k, avg=195880.66, stdev=36449.71 00:09:31.168 lat (usec): min=134, 
max=1572, avg=217.46, stdev=36.60 00:09:31.168 clat percentiles (usec): 00:09:31.168 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:09:31.168 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:09:31.168 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 227], 00:09:31.168 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 347], 99.95th=[ 416], 00:09:31.168 | 99.99th=[ 1549] 00:09:31.168 bw ( KiB/s): min= 8192, max= 8192, per=19.76%, avg=8192.00, stdev= 0.00, samples=1 00:09:31.168 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:31.168 lat (usec) : 2=0.02%, 4=0.02%, 20=0.02%, 50=0.02%, 250=71.20% 00:09:31.168 lat (usec) : 500=28.65%, 750=0.02% 00:09:31.168 lat (msec) : 2=0.02% 00:09:31.168 cpu : usr=1.60%, sys=5.90%, ctx=4108, majf=0, minf=14 00:09:31.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.168 issued rwts: total=2039,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.168 00:09:31.168 Run status group 0 (all jobs): 00:09:31.168 READ: bw=37.9MiB/s (39.7MB/s), 8148KiB/s-12.0MiB/s (8343kB/s-12.6MB/s), io=37.9MiB (39.8MB), run=1001-1001msec 00:09:31.168 WRITE: bw=40.5MiB/s (42.4MB/s), 8184KiB/s-13.1MiB/s (8380kB/s-13.7MB/s), io=40.5MiB (42.5MB), run=1001-1001msec 00:09:31.168 00:09:31.168 Disk stats (read/write): 00:09:31.168 nvme0n1: ios=2622/3072, merge=0/0, ticks=438/374, in_queue=812, util=89.18% 00:09:31.168 nvme0n2: ios=1618/2048, merge=0/0, ticks=434/358, in_queue=792, util=89.91% 00:09:31.168 nvme0n3: ios=2232/2560, merge=0/0, ticks=440/371, in_queue=811, util=89.26% 00:09:31.168 nvme0n4: ios=1567/2048, merge=0/0, ticks=383/387, in_queue=770, util=89.83% 00:09:31.168 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:31.168 [global] 00:09:31.168 thread=1 00:09:31.168 invalidate=1 00:09:31.168 rw=write 00:09:31.168 time_based=1 00:09:31.168 runtime=1 00:09:31.168 ioengine=libaio 00:09:31.168 direct=1 00:09:31.168 bs=4096 00:09:31.168 iodepth=128 00:09:31.168 norandommap=0 00:09:31.168 numjobs=1 00:09:31.168 00:09:31.168 verify_dump=1 00:09:31.168 verify_backlog=512 00:09:31.168 verify_state_save=0 00:09:31.168 do_verify=1 00:09:31.168 verify=crc32c-intel 00:09:31.168 [job0] 00:09:31.168 filename=/dev/nvme0n1 00:09:31.168 [job1] 00:09:31.168 filename=/dev/nvme0n2 00:09:31.168 [job2] 00:09:31.168 filename=/dev/nvme0n3 00:09:31.168 [job3] 00:09:31.168 filename=/dev/nvme0n4 00:09:31.168 Could not set queue depth (nvme0n1) 00:09:31.168 Could not set queue depth (nvme0n2) 00:09:31.168 Could not set queue depth (nvme0n3) 00:09:31.168 Could not set queue depth (nvme0n4) 00:09:31.168 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:31.168 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:31.168 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:31.168 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:31.168 fio-3.35 00:09:31.168 Starting 4 threads 00:09:32.596 00:09:32.596 job0: (groupid=0, 
jobs=1): err= 0: pid=67959: Thu Jul 25 10:48:01 2024 00:09:32.596 read: IOPS=3671, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1005msec) 00:09:32.596 slat (usec): min=4, max=14178, avg=136.61, stdev=850.61 00:09:32.596 clat (usec): min=3835, max=52691, avg=18468.65, stdev=6408.37 00:09:32.596 lat (usec): min=3850, max=52705, avg=18605.26, stdev=6463.90 00:09:32.596 clat percentiles (usec): 00:09:32.596 | 1.00th=[ 4293], 5.00th=[10552], 10.00th=[11076], 20.00th=[13698], 00:09:32.596 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17171], 60.00th=[17695], 00:09:32.596 | 70.00th=[20317], 80.00th=[23725], 90.00th=[25297], 95.00th=[25822], 00:09:32.596 | 99.00th=[45351], 99.50th=[49021], 99.90th=[52691], 99.95th=[52691], 00:09:32.596 | 99.99th=[52691] 00:09:32.596 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:32.596 slat (usec): min=4, max=5756, avg=113.77, stdev=576.57 00:09:32.596 clat (usec): min=4594, max=52646, avg=14349.00, stdev=8022.40 00:09:32.596 lat (usec): min=4620, max=52660, avg=14462.77, stdev=8060.23 00:09:32.596 clat percentiles (usec): 00:09:32.596 | 1.00th=[ 7767], 5.00th=[10290], 10.00th=[10421], 20.00th=[10683], 00:09:32.596 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11469], 60.00th=[12780], 00:09:32.596 | 70.00th=[13566], 80.00th=[13960], 90.00th=[18482], 95.00th=[34341], 00:09:32.596 | 99.00th=[49021], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:09:32.596 | 99.99th=[52691] 00:09:32.596 bw ( KiB/s): min=16216, max=16416, per=28.45%, avg=16316.00, stdev=141.42, samples=2 00:09:32.596 iops : min= 4054, max= 4104, avg=4079.00, stdev=35.36, samples=2 00:09:32.596 lat (msec) : 4=0.10%, 10=3.54%, 20=74.66%, 50=21.44%, 100=0.26% 00:09:32.596 cpu : usr=4.08%, sys=9.76%, ctx=279, majf=0, minf=8 00:09:32.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:32.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.596 issued rwts: total=3690,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.596 job1: (groupid=0, jobs=1): err= 0: pid=67960: Thu Jul 25 10:48:01 2024 00:09:32.596 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:09:32.596 slat (usec): min=6, max=7356, avg=154.57, stdev=668.45 00:09:32.596 clat (usec): min=12570, max=39662, avg=19743.14, stdev=3733.61 00:09:32.596 lat (usec): min=13374, max=39685, avg=19897.71, stdev=3795.89 00:09:32.596 clat percentiles (usec): 00:09:32.596 | 1.00th=[13960], 5.00th=[15401], 10.00th=[16581], 20.00th=[16909], 00:09:32.596 | 30.00th=[16909], 40.00th=[17695], 50.00th=[19006], 60.00th=[19530], 00:09:32.596 | 70.00th=[20841], 80.00th=[23200], 90.00th=[24773], 95.00th=[26870], 00:09:32.596 | 99.00th=[31065], 99.50th=[33424], 99.90th=[36963], 99.95th=[36963], 00:09:32.596 | 99.99th=[39584] 00:09:32.596 write: IOPS=3292, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1004msec); 0 zone resets 00:09:32.596 slat (usec): min=12, max=7473, avg=150.50, stdev=648.11 00:09:32.596 clat (usec): min=2722, max=49918, avg=19886.40, stdev=9873.97 00:09:32.596 lat (usec): min=5369, max=49943, avg=20036.90, stdev=9937.86 00:09:32.596 clat percentiles (usec): 00:09:32.596 | 1.00th=[10683], 5.00th=[11338], 10.00th=[11600], 20.00th=[13304], 00:09:32.596 | 30.00th=[13829], 40.00th=[14222], 50.00th=[15926], 60.00th=[17957], 00:09:32.596 | 70.00th=[19792], 80.00th=[27132], 90.00th=[38011], 95.00th=[42730], 00:09:32.596 | 
99.00th=[47449], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:09:32.596 | 99.99th=[50070] 00:09:32.596 bw ( KiB/s): min= 9736, max=15719, per=22.20%, avg=12727.50, stdev=4230.62, samples=2 00:09:32.596 iops : min= 2434, max= 3929, avg=3181.50, stdev=1057.12, samples=2 00:09:32.596 lat (msec) : 4=0.02%, 10=0.25%, 20=67.84%, 50=31.89% 00:09:32.597 cpu : usr=2.99%, sys=10.77%, ctx=301, majf=0, minf=11 00:09:32.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:32.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.597 issued rwts: total=3072,3306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.597 job2: (groupid=0, jobs=1): err= 0: pid=67961: Thu Jul 25 10:48:01 2024 00:09:32.597 read: IOPS=3095, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1003msec) 00:09:32.597 slat (usec): min=10, max=10963, avg=169.47, stdev=1010.50 00:09:32.597 clat (usec): min=685, max=45524, avg=21730.27, stdev=8655.73 00:09:32.597 lat (usec): min=9803, max=45540, avg=21899.75, stdev=8663.40 00:09:32.597 clat percentiles (usec): 00:09:32.597 | 1.00th=[10290], 5.00th=[12125], 10.00th=[12518], 20.00th=[13829], 00:09:32.597 | 30.00th=[16188], 40.00th=[17171], 50.00th=[20055], 60.00th=[20579], 00:09:32.597 | 70.00th=[26608], 80.00th=[27657], 90.00th=[36439], 95.00th=[40109], 00:09:32.597 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:09:32.597 | 99.99th=[45351] 00:09:32.597 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:09:32.597 slat (usec): min=12, max=10487, avg=124.83, stdev=637.64 00:09:32.597 clat (usec): min=9020, max=31596, avg=16528.82, stdev=5242.45 00:09:32.597 lat (usec): min=9535, max=31625, avg=16653.64, stdev=5240.98 00:09:32.597 clat percentiles (usec): 00:09:32.597 | 1.00th=[10159], 5.00th=[11338], 10.00th=[11863], 20.00th=[12256], 00:09:32.597 | 30.00th=[12649], 40.00th=[12911], 50.00th=[14746], 60.00th=[17171], 00:09:32.597 | 70.00th=[18482], 80.00th=[19268], 90.00th=[26870], 95.00th=[27657], 00:09:32.597 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:09:32.597 | 99.99th=[31589] 00:09:32.597 bw ( KiB/s): min=11528, max=16416, per=24.37%, avg=13972.00, stdev=3456.34, samples=2 00:09:32.597 iops : min= 2882, max= 4104, avg=3493.00, stdev=864.08, samples=2 00:09:32.597 lat (usec) : 750=0.01% 00:09:32.597 lat (msec) : 10=0.61%, 20=66.05%, 50=33.32% 00:09:32.597 cpu : usr=3.09%, sys=10.88%, ctx=212, majf=0, minf=5 00:09:32.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:32.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.597 issued rwts: total=3105,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.597 job3: (groupid=0, jobs=1): err= 0: pid=67962: Thu Jul 25 10:48:01 2024 00:09:32.597 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:32.597 slat (usec): min=5, max=8388, avg=174.72, stdev=936.16 00:09:32.597 clat (usec): min=10930, max=34431, avg=22249.64, stdev=5912.06 00:09:32.597 lat (usec): min=13248, max=34455, avg=22424.36, stdev=5888.05 00:09:32.597 clat percentiles (usec): 00:09:32.597 | 1.00th=[13304], 5.00th=[14353], 10.00th=[14877], 20.00th=[15533], 00:09:32.597 | 
30.00th=[17957], 40.00th=[21365], 50.00th=[22414], 60.00th=[22938], 00:09:32.597 | 70.00th=[25297], 80.00th=[26608], 90.00th=[31327], 95.00th=[33817], 00:09:32.597 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:09:32.597 | 99.99th=[34341] 00:09:32.597 write: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1003msec); 0 zone resets 00:09:32.597 slat (usec): min=10, max=7959, avg=128.14, stdev=616.99 00:09:32.597 clat (usec): min=2020, max=27509, avg=16936.74, stdev=4809.90 00:09:32.597 lat (usec): min=2045, max=27542, avg=17064.88, stdev=4795.50 00:09:32.597 clat percentiles (usec): 00:09:32.597 | 1.00th=[ 5473], 5.00th=[12125], 10.00th=[12387], 20.00th=[12649], 00:09:32.597 | 30.00th=[13435], 40.00th=[15139], 50.00th=[16057], 60.00th=[16581], 00:09:32.597 | 70.00th=[19006], 80.00th=[21627], 90.00th=[24773], 95.00th=[25560], 00:09:32.597 | 99.00th=[26608], 99.50th=[27395], 99.90th=[27395], 99.95th=[27395], 00:09:32.597 | 99.99th=[27395] 00:09:32.597 bw ( KiB/s): min=12288, max=14064, per=22.98%, avg=13176.00, stdev=1255.82, samples=2 00:09:32.597 iops : min= 3072, max= 3516, avg=3294.00, stdev=313.96, samples=2 00:09:32.597 lat (msec) : 4=0.45%, 10=1.06%, 20=53.30%, 50=45.19% 00:09:32.597 cpu : usr=2.50%, sys=9.58%, ctx=205, majf=0, minf=13 00:09:32.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:32.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.597 issued rwts: total=3072,3421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.597 00:09:32.597 Run status group 0 (all jobs): 00:09:32.597 READ: bw=50.3MiB/s (52.7MB/s), 12.0MiB/s-14.3MiB/s (12.5MB/s-15.0MB/s), io=50.5MiB (53.0MB), run=1003-1005msec 00:09:32.597 WRITE: bw=56.0MiB/s (58.7MB/s), 12.9MiB/s-15.9MiB/s (13.5MB/s-16.7MB/s), io=56.3MiB (59.0MB), run=1003-1005msec 00:09:32.597 00:09:32.597 Disk stats (read/write): 00:09:32.597 nvme0n1: ios=3122/3519, merge=0/0, ticks=48910/39531, in_queue=88441, util=87.07% 00:09:32.597 nvme0n2: ios=2595/3063, merge=0/0, ticks=16640/16732, in_queue=33372, util=88.04% 00:09:32.597 nvme0n3: ios=2816/3072, merge=0/0, ticks=14279/10134, in_queue=24413, util=89.12% 00:09:32.597 nvme0n4: ios=2560/2848, merge=0/0, ticks=14491/10939, in_queue=25430, util=89.78% 00:09:32.597 10:48:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:32.597 [global] 00:09:32.597 thread=1 00:09:32.597 invalidate=1 00:09:32.597 rw=randwrite 00:09:32.597 time_based=1 00:09:32.597 runtime=1 00:09:32.597 ioengine=libaio 00:09:32.597 direct=1 00:09:32.597 bs=4096 00:09:32.597 iodepth=128 00:09:32.597 norandommap=0 00:09:32.597 numjobs=1 00:09:32.597 00:09:32.597 verify_dump=1 00:09:32.597 verify_backlog=512 00:09:32.597 verify_state_save=0 00:09:32.597 do_verify=1 00:09:32.597 verify=crc32c-intel 00:09:32.597 [job0] 00:09:32.597 filename=/dev/nvme0n1 00:09:32.597 [job1] 00:09:32.597 filename=/dev/nvme0n2 00:09:32.597 [job2] 00:09:32.597 filename=/dev/nvme0n3 00:09:32.597 [job3] 00:09:32.597 filename=/dev/nvme0n4 00:09:32.597 Could not set queue depth (nvme0n1) 00:09:32.597 Could not set queue depth (nvme0n2) 00:09:32.597 Could not set queue depth (nvme0n3) 00:09:32.597 Could not set queue depth (nvme0n4) 00:09:32.597 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.597 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.597 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.597 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.597 fio-3.35 00:09:32.597 Starting 4 threads 00:09:33.973 00:09:33.973 job0: (groupid=0, jobs=1): err= 0: pid=68019: Thu Jul 25 10:48:03 2024 00:09:33.973 read: IOPS=1640, BW=6562KiB/s (6719kB/s)(6588KiB/1004msec) 00:09:33.974 slat (usec): min=8, max=12445, avg=250.08, stdev=1089.50 00:09:33.974 clat (usec): min=1520, max=61526, avg=29093.15, stdev=8653.52 00:09:33.974 lat (usec): min=7714, max=61548, avg=29343.23, stdev=8755.27 00:09:33.974 clat percentiles (usec): 00:09:33.974 | 1.00th=[ 8029], 5.00th=[19268], 10.00th=[20055], 20.00th=[20317], 00:09:33.974 | 30.00th=[23200], 40.00th=[26084], 50.00th=[27657], 60.00th=[29754], 00:09:33.974 | 70.00th=[34866], 80.00th=[38011], 90.00th=[39060], 95.00th=[42206], 00:09:33.974 | 99.00th=[52167], 99.50th=[56886], 99.90th=[56886], 99.95th=[61604], 00:09:33.974 | 99.99th=[61604] 00:09:33.974 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:09:33.974 slat (usec): min=17, max=8191, avg=279.15, stdev=1033.88 00:09:33.974 clat (usec): min=16781, max=78626, avg=38251.60, stdev=18799.32 00:09:33.974 lat (usec): min=16807, max=78676, avg=38530.74, stdev=18923.97 00:09:33.974 clat percentiles (usec): 00:09:33.974 | 1.00th=[17433], 5.00th=[19006], 10.00th=[19268], 20.00th=[19792], 00:09:33.974 | 30.00th=[21103], 40.00th=[22938], 50.00th=[31327], 60.00th=[47449], 00:09:33.974 | 70.00th=[49021], 80.00th=[56886], 90.00th=[65799], 95.00th=[70779], 00:09:33.974 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:09:33.974 | 99.99th=[78119] 00:09:33.974 bw ( KiB/s): min= 8056, max= 8192, per=15.81%, avg=8124.00, stdev=96.17, samples=2 00:09:33.974 iops : min= 2014, max= 2048, avg=2031.00, stdev=24.04, samples=2 00:09:33.974 lat (msec) : 2=0.03%, 10=0.70%, 20=15.81%, 50=67.39%, 100=16.08% 00:09:33.974 cpu : usr=1.79%, sys=7.08%, ctx=230, majf=0, minf=13 00:09:33.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:09:33.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.974 issued rwts: total=1647,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.974 job1: (groupid=0, jobs=1): err= 0: pid=68021: Thu Jul 25 10:48:03 2024 00:09:33.974 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:09:33.974 slat (usec): min=6, max=3986, avg=103.13, stdev=491.49 00:09:33.974 clat (usec): min=8395, max=16679, avg=13854.01, stdev=1533.84 00:09:33.974 lat (usec): min=10381, max=16694, avg=13957.14, stdev=1464.51 00:09:33.974 clat percentiles (usec): 00:09:33.974 | 1.00th=[10552], 5.00th=[10814], 10.00th=[11338], 20.00th=[12649], 00:09:33.974 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14222], 60.00th=[14484], 00:09:33.974 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15533], 95.00th=[15664], 00:09:33.974 | 99.00th=[16057], 99.50th=[16581], 99.90th=[16712], 99.95th=[16712], 00:09:33.974 | 99.99th=[16712] 00:09:33.974 write: IOPS=4695, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1002msec); 0 zone resets 
00:09:33.974 slat (usec): min=11, max=4422, avg=103.48, stdev=452.71 00:09:33.974 clat (usec): min=446, max=16483, avg=13326.88, stdev=1908.86 00:09:33.974 lat (usec): min=3105, max=16498, avg=13430.36, stdev=1864.65 00:09:33.974 clat percentiles (usec): 00:09:33.974 | 1.00th=[ 7242], 5.00th=[10552], 10.00th=[10683], 20.00th=[11076], 00:09:33.974 | 30.00th=[13173], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:09:33.974 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15008], 95.00th=[15533], 00:09:33.974 | 99.00th=[16057], 99.50th=[16450], 99.90th=[16450], 99.95th=[16450], 00:09:33.974 | 99.99th=[16450] 00:09:33.974 bw ( KiB/s): min=20480, max=20480, per=39.86%, avg=20480.00, stdev= 0.00, samples=1 00:09:33.974 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:09:33.974 lat (usec) : 500=0.01% 00:09:33.974 lat (msec) : 4=0.33%, 10=0.99%, 20=98.67% 00:09:33.974 cpu : usr=3.50%, sys=14.59%, ctx=293, majf=0, minf=1 00:09:33.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:33.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.974 issued rwts: total=4608,4705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.974 job2: (groupid=0, jobs=1): err= 0: pid=68027: Thu Jul 25 10:48:03 2024 00:09:33.974 read: IOPS=2423, BW=9693KiB/s (9926kB/s)(9732KiB/1004msec) 00:09:33.974 slat (usec): min=7, max=15368, avg=222.78, stdev=1205.41 00:09:33.974 clat (usec): min=499, max=56370, avg=27815.05, stdev=8760.25 00:09:33.974 lat (usec): min=7572, max=56391, avg=28037.84, stdev=8738.98 00:09:33.974 clat percentiles (usec): 00:09:33.974 | 1.00th=[ 8094], 5.00th=[19792], 10.00th=[21890], 20.00th=[22938], 00:09:33.974 | 30.00th=[23987], 40.00th=[24249], 50.00th=[25035], 60.00th=[26084], 00:09:33.974 | 70.00th=[27132], 80.00th=[30278], 90.00th=[40109], 95.00th=[50070], 00:09:33.974 | 99.00th=[56361], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:09:33.974 | 99.99th=[56361] 00:09:33.974 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:09:33.974 slat (usec): min=14, max=7514, avg=170.97, stdev=818.93 00:09:33.974 clat (usec): min=14095, max=32573, avg=22967.34, stdev=3469.48 00:09:33.974 lat (usec): min=17012, max=32624, avg=23138.31, stdev=3379.93 00:09:33.974 clat percentiles (usec): 00:09:33.974 | 1.00th=[16057], 5.00th=[18482], 10.00th=[19006], 20.00th=[19530], 00:09:33.974 | 30.00th=[20055], 40.00th=[21103], 50.00th=[22938], 60.00th=[25035], 00:09:33.974 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26346], 95.00th=[27919], 00:09:33.974 | 99.00th=[32375], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:09:33.974 | 99.99th=[32637] 00:09:33.974 bw ( KiB/s): min= 8192, max=12288, per=19.93%, avg=10240.00, stdev=2896.31, samples=2 00:09:33.974 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:09:33.974 lat (usec) : 500=0.02% 00:09:33.974 lat (msec) : 10=0.64%, 20=17.56%, 50=79.91%, 100=1.86% 00:09:33.974 cpu : usr=2.59%, sys=8.47%, ctx=157, majf=0, minf=5 00:09:33.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:33.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.974 issued rwts: total=2433,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.974 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:09:33.974 job3: (groupid=0, jobs=1): err= 0: pid=68029: Thu Jul 25 10:48:03 2024 00:09:33.974 read: IOPS=3124, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1004msec) 00:09:33.974 slat (usec): min=6, max=6922, avg=153.23, stdev=779.42 00:09:33.974 clat (usec): min=450, max=27100, avg=19864.76, stdev=4355.48 00:09:33.974 lat (usec): min=4192, max=27142, avg=20017.99, stdev=4310.29 00:09:33.974 clat percentiles (usec): 00:09:33.974 | 1.00th=[ 4948], 5.00th=[12387], 10.00th=[12780], 20.00th=[17433], 00:09:33.974 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20317], 60.00th=[21103], 00:09:33.974 | 70.00th=[21890], 80.00th=[23987], 90.00th=[25035], 95.00th=[25297], 00:09:33.974 | 99.00th=[26084], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:09:33.974 | 99.99th=[27132] 00:09:33.974 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:33.974 slat (usec): min=11, max=5961, avg=137.70, stdev=647.57 00:09:33.974 clat (usec): min=9032, max=24988, avg=18006.36, stdev=4157.50 00:09:33.974 lat (usec): min=10081, max=26064, avg=18144.06, stdev=4140.25 00:09:33.974 clat percentiles (usec): 00:09:33.974 | 1.00th=[11207], 5.00th=[11600], 10.00th=[11863], 20.00th=[12387], 00:09:33.974 | 30.00th=[16188], 40.00th=[18744], 50.00th=[19268], 60.00th=[19530], 00:09:33.974 | 70.00th=[19792], 80.00th=[20841], 90.00th=[23725], 95.00th=[24249], 00:09:33.974 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:09:33.974 | 99.99th=[25035] 00:09:33.974 bw ( KiB/s): min=13056, max=15112, per=27.41%, avg=14084.00, stdev=1453.81, samples=2 00:09:33.974 iops : min= 3264, max= 3778, avg=3521.00, stdev=363.45, samples=2 00:09:33.974 lat (usec) : 500=0.01% 00:09:33.974 lat (msec) : 10=1.26%, 20=58.12%, 50=40.60% 00:09:33.974 cpu : usr=2.19%, sys=11.96%, ctx=212, majf=0, minf=2 00:09:33.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:33.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.974 issued rwts: total=3137,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.974 00:09:33.974 Run status group 0 (all jobs): 00:09:33.974 READ: bw=46.0MiB/s (48.2MB/s), 6562KiB/s-18.0MiB/s (6719kB/s-18.8MB/s), io=46.2MiB (48.4MB), run=1002-1004msec 00:09:33.974 WRITE: bw=50.2MiB/s (52.6MB/s), 8159KiB/s-18.3MiB/s (8355kB/s-19.2MB/s), io=50.4MiB (52.8MB), run=1002-1004msec 00:09:33.974 00:09:33.974 Disk stats (read/write): 00:09:33.974 nvme0n1: ios=1409/1536, merge=0/0, ticks=13680/21063, in_queue=34743, util=88.58% 00:09:33.974 nvme0n2: ios=4039/4096, merge=0/0, ticks=12373/11906, in_queue=24279, util=88.16% 00:09:33.974 nvme0n3: ios=2048/2496, merge=0/0, ticks=12488/12468, in_queue=24956, util=89.14% 00:09:33.974 nvme0n4: ios=2752/3072, merge=0/0, ticks=12631/12043, in_queue=24674, util=89.70% 00:09:33.974 10:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:33.974 10:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68043 00:09:33.974 10:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:33.974 10:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:33.974 [global] 00:09:33.974 thread=1 00:09:33.974 invalidate=1 00:09:33.974 
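The job file being printed here (its remaining lines, rw=read and runtime=10 among them, continue just below) drives the final ten-second read workload that the hotplug check runs in the background: the script records the fio PID, sleeps briefly, then deletes every backing bdev while the job is still reading, and treats the resulting Remote I/O errors as the expected outcome. In outline, with the same wrapper and rpc.py paths as in the trace (a sketch of the visible flow, not the script verbatim):

  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_raid_delete concat0                    # pull the raid volumes first
  $rpc bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete $m                     # then every malloc bdev
  done
  fio_status=0
  wait $fio_pid || fio_status=4                    # fio is expected to exit non-zero (the trace shows fio_status=4)
  [ $fio_status -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'

After the deletions, the real fio.sh disconnects the initiator and tears the subsystem down, as the trace that follows shows.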
rw=read 00:09:33.974 time_based=1 00:09:33.974 runtime=10 00:09:33.974 ioengine=libaio 00:09:33.974 direct=1 00:09:33.974 bs=4096 00:09:33.974 iodepth=1 00:09:33.974 norandommap=1 00:09:33.974 numjobs=1 00:09:33.974 00:09:33.974 [job0] 00:09:33.974 filename=/dev/nvme0n1 00:09:33.974 [job1] 00:09:33.974 filename=/dev/nvme0n2 00:09:33.974 [job2] 00:09:33.974 filename=/dev/nvme0n3 00:09:33.974 [job3] 00:09:33.974 filename=/dev/nvme0n4 00:09:33.974 Could not set queue depth (nvme0n1) 00:09:33.974 Could not set queue depth (nvme0n2) 00:09:33.974 Could not set queue depth (nvme0n3) 00:09:33.974 Could not set queue depth (nvme0n4) 00:09:33.974 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.975 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.975 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.975 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.975 fio-3.35 00:09:33.975 Starting 4 threads 00:09:37.263 10:48:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:37.263 fio: pid=68086, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:37.263 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=50184192, buflen=4096 00:09:37.263 10:48:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:37.263 fio: pid=68085, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:37.263 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=56582144, buflen=4096 00:09:37.263 10:48:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.263 10:48:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:37.831 fio: pid=68083, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:37.831 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=58093568, buflen=4096 00:09:37.831 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.831 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:37.831 fio: pid=68084, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:37.831 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=323584, buflen=4096 00:09:37.831 00:09:37.831 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68083: Thu Jul 25 10:48:07 2024 00:09:37.831 read: IOPS=3932, BW=15.4MiB/s (16.1MB/s)(55.4MiB/3607msec) 00:09:37.831 slat (usec): min=8, max=10224, avg=19.70, stdev=154.04 00:09:37.831 clat (usec): min=131, max=1968, avg=233.26, stdev=46.48 00:09:37.831 lat (usec): min=145, max=10464, avg=252.96, stdev=161.92 00:09:37.831 clat percentiles (usec): 00:09:37.831 | 1.00th=[ 149], 5.00th=[ 163], 10.00th=[ 186], 20.00th=[ 204], 00:09:37.831 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:09:37.831 | 70.00th=[ 
249], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 306], 00:09:37.831 | 99.00th=[ 347], 99.50th=[ 379], 99.90th=[ 537], 99.95th=[ 652], 00:09:37.831 | 99.99th=[ 1352] 00:09:37.831 bw ( KiB/s): min=15024, max=17520, per=27.26%, avg=16000.00, stdev=853.67, samples=6 00:09:37.831 iops : min= 3756, max= 4380, avg=4000.00, stdev=213.42, samples=6 00:09:37.831 lat (usec) : 250=72.02%, 500=27.83%, 750=0.11%, 1000=0.01% 00:09:37.831 lat (msec) : 2=0.02% 00:09:37.831 cpu : usr=1.11%, sys=5.85%, ctx=14194, majf=0, minf=1 00:09:37.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.831 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.831 issued rwts: total=14184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.831 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68084: Thu Jul 25 10:48:07 2024 00:09:37.831 read: IOPS=4259, BW=16.6MiB/s (17.4MB/s)(64.3MiB/3865msec) 00:09:37.831 slat (usec): min=8, max=14828, avg=17.74, stdev=184.08 00:09:37.831 clat (usec): min=124, max=7731, avg=215.67, stdev=105.28 00:09:37.831 lat (usec): min=136, max=15068, avg=233.41, stdev=212.69 00:09:37.831 clat percentiles (usec): 00:09:37.831 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 157], 20.00th=[ 176], 00:09:37.831 | 30.00th=[ 192], 40.00th=[ 202], 50.00th=[ 212], 60.00th=[ 223], 00:09:37.831 | 70.00th=[ 233], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 289], 00:09:37.831 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 701], 99.95th=[ 1418], 00:09:37.831 | 99.99th=[ 6063] 00:09:37.831 bw ( KiB/s): min=13970, max=18328, per=28.39%, avg=16663.14, stdev=1422.04, samples=7 00:09:37.831 iops : min= 3492, max= 4582, avg=4165.71, stdev=355.67, samples=7 00:09:37.831 lat (usec) : 250=82.09%, 500=17.72%, 750=0.10%, 1000=0.01% 00:09:37.831 lat (msec) : 2=0.04%, 4=0.02%, 10=0.02% 00:09:37.831 cpu : usr=1.29%, sys=5.49%, ctx=16481, majf=0, minf=1 00:09:37.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.831 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.831 issued rwts: total=16464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.831 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68085: Thu Jul 25 10:48:07 2024 00:09:37.831 read: IOPS=4218, BW=16.5MiB/s (17.3MB/s)(54.0MiB/3275msec) 00:09:37.831 slat (usec): min=12, max=10911, avg=16.87, stdev=114.03 00:09:37.831 clat (usec): min=110, max=2364, avg=218.78, stdev=56.17 00:09:37.831 lat (usec): min=157, max=11098, avg=235.65, stdev=126.97 00:09:37.831 clat percentiles (usec): 00:09:37.831 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 180], 00:09:37.831 | 30.00th=[ 190], 40.00th=[ 204], 50.00th=[ 217], 60.00th=[ 227], 00:09:37.831 | 70.00th=[ 239], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 293], 00:09:37.831 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 420], 99.95th=[ 701], 00:09:37.831 | 99.99th=[ 2245] 00:09:37.831 bw ( KiB/s): min=15688, max=18088, per=28.33%, avg=16629.33, stdev=1007.47, samples=6 00:09:37.831 iops : min= 3922, max= 4522, avg=4157.33, stdev=251.87, samples=6 00:09:37.831 lat (usec) : 250=78.45%, 500=21.46%, 750=0.04%, 
1000=0.01% 00:09:37.831 lat (msec) : 2=0.01%, 4=0.03% 00:09:37.831 cpu : usr=1.34%, sys=5.47%, ctx=13818, majf=0, minf=1 00:09:37.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.831 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.831 issued rwts: total=13815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.831 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68086: Thu Jul 25 10:48:07 2024 00:09:37.831 read: IOPS=4124, BW=16.1MiB/s (16.9MB/s)(47.9MiB/2971msec) 00:09:37.831 slat (usec): min=11, max=114, avg=15.56, stdev= 4.70 00:09:37.831 clat (usec): min=142, max=2269, avg=225.46, stdev=53.20 00:09:37.831 lat (usec): min=158, max=2295, avg=241.02, stdev=52.89 00:09:37.831 clat percentiles (usec): 00:09:37.831 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 186], 00:09:37.831 | 30.00th=[ 198], 40.00th=[ 210], 50.00th=[ 223], 60.00th=[ 233], 00:09:37.831 | 70.00th=[ 243], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 302], 00:09:37.831 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 570], 99.95th=[ 693], 00:09:37.831 | 99.99th=[ 2114] 00:09:37.831 bw ( KiB/s): min=15696, max=17672, per=27.84%, avg=16339.20, stdev=792.64, samples=5 00:09:37.831 iops : min= 3924, max= 4418, avg=4084.80, stdev=198.16, samples=5 00:09:37.831 lat (usec) : 250=74.77%, 500=25.01%, 750=0.17%, 1000=0.02% 00:09:37.831 lat (msec) : 2=0.02%, 4=0.02% 00:09:37.831 cpu : usr=1.62%, sys=5.56%, ctx=12260, majf=0, minf=1 00:09:37.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.832 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.832 issued rwts: total=12253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.832 00:09:37.832 Run status group 0 (all jobs): 00:09:37.832 READ: bw=57.3MiB/s (60.1MB/s), 15.4MiB/s-16.6MiB/s (16.1MB/s-17.4MB/s), io=222MiB (232MB), run=2971-3865msec 00:09:37.832 00:09:37.832 Disk stats (read/write): 00:09:37.832 nvme0n1: ios=13235/0, merge=0/0, ticks=3111/0, in_queue=3111, util=95.39% 00:09:37.832 nvme0n2: ios=14968/0, merge=0/0, ticks=3283/0, in_queue=3283, util=95.45% 00:09:37.832 nvme0n3: ios=12947/0, merge=0/0, ticks=2939/0, in_queue=2939, util=96.27% 00:09:37.832 nvme0n4: ios=11709/0, merge=0/0, ticks=2689/0, in_queue=2689, util=96.76% 00:09:38.117 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.117 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:38.375 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.375 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:38.634 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.634 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:38.893 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.893 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:39.151 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.151 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:39.409 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:39.409 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 68043 00:09:39.409 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:39.409 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:39.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.409 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:39.409 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:39.409 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:39.409 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.409 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:39.409 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.409 nvmf hotplug test: fio failed as expected 00:09:39.409 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:39.409 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:39.409 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:39.409 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.666 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:39.666 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:39.666 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:39.666 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:39.666 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:39.666 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:39.666 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:39.666 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:39.666 10:48:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:39.666 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:39.666 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:39.666 rmmod nvme_tcp 00:09:39.666 rmmod nvme_fabrics 00:09:39.666 rmmod nvme_keyring 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 67654 ']' 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 67654 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 67654 ']' 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 67654 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67654 00:09:39.924 killing process with pid 67654 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67654' 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 67654 00:09:39.924 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 67654 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:40.184 00:09:40.184 real 0m19.984s 00:09:40.184 user 1m15.403s 00:09:40.184 sys 0m10.400s 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.184 ************************************ 00:09:40.184 END TEST nvmf_fio_target 00:09:40.184 ************************************ 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.184 ************************************ 00:09:40.184 START TEST nvmf_bdevio 00:09:40.184 ************************************ 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:40.184 * Looking for test storage... 00:09:40.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.184 10:48:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.184 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:40.185 Cannot find device "nvmf_tgt_br" 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.185 Cannot find device "nvmf_tgt_br2" 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:40.185 Cannot find device "nvmf_tgt_br" 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:09:40.185 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:40.443 Cannot find device "nvmf_tgt_br2" 00:09:40.443 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:09:40.443 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:40.443 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:40.443 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.443 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:40.443 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.443 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:40.443 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:40.443 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:40.443 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk 
ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:40.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:09:40.443 00:09:40.443 --- 10.0.0.2 ping statistics --- 00:09:40.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.443 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:40.443 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:40.443 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:09:40.443 00:09:40.443 --- 10.0.0.3 ping statistics --- 00:09:40.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.443 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:40.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:40.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:09:40.443 00:09:40.443 --- 10.0.0.1 ping statistics --- 00:09:40.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.443 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.443 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:40.702 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68349 00:09:40.702 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:40.702 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68349 00:09:40.702 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 68349 ']' 00:09:40.702 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.702 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.702 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.702 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.702 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:40.702 [2024-07-25 10:48:10.240089] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
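The nvmf_veth_init trace above builds the virtual network that the bdevio target and initiator use for the rest of this run. Condensed into a plain shell sketch (names and addresses are the ones shown in the traced ip commands; the second target interface nvmf_tgt_if2 / 10.0.0.3 is set up the same way and omitted here for brevity):

    # condensed sketch of the traced nvmf_veth_init steps, not the script itself
    ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge ties the host-side ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                         # connectivity checks, as in the log above
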
00:09:40.702 [2024-07-25 10:48:10.240179] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.702 [2024-07-25 10:48:10.379289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.960 [2024-07-25 10:48:10.496919] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.960 [2024-07-25 10:48:10.496974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.960 [2024-07-25 10:48:10.496985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.960 [2024-07-25 10:48:10.496994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.960 [2024-07-25 10:48:10.497001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.960 [2024-07-25 10:48:10.497114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:40.960 [2024-07-25 10:48:10.498189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:40.960 [2024-07-25 10:48:10.498283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.960 [2024-07-25 10:48:10.498283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:40.960 [2024-07-25 10:48:10.551426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:41.527 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.527 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:41.527 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.527 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.527 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 [2024-07-25 10:48:11.287555] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 Malloc0 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 [2024-07-25 10:48:11.366914] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:41.787 { 00:09:41.787 "params": { 00:09:41.787 "name": "Nvme$subsystem", 00:09:41.787 "trtype": "$TEST_TRANSPORT", 00:09:41.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.787 "adrfam": "ipv4", 00:09:41.787 "trsvcid": "$NVMF_PORT", 00:09:41.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.787 "hdgst": ${hdgst:-false}, 00:09:41.787 "ddgst": ${ddgst:-false} 00:09:41.787 }, 00:09:41.787 "method": "bdev_nvme_attach_controller" 00:09:41.787 } 00:09:41.787 EOF 00:09:41.787 )") 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
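The rpc_cmd calls traced above provision the target that bdevio will exercise. Outside the test wrapper the same setup can be reproduced with scripts/rpc.py directly; this is a sketch assuming the default /var/tmp/spdk.sock RPC socket and a target already running inside the namespace, with the exact arguments taken from the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the options the test passes
    $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on the target veth

The JSON printed next in the log is the bdev_nvme_attach_controller config that bdevio consumes on /dev/fd/62 to reach this subsystem as Nvme1.
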
00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:41.787 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:41.787 "params": { 00:09:41.787 "name": "Nvme1", 00:09:41.787 "trtype": "tcp", 00:09:41.787 "traddr": "10.0.0.2", 00:09:41.787 "adrfam": "ipv4", 00:09:41.787 "trsvcid": "4420", 00:09:41.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.787 "hdgst": false, 00:09:41.787 "ddgst": false 00:09:41.787 }, 00:09:41.787 "method": "bdev_nvme_attach_controller" 00:09:41.787 }' 00:09:41.787 [2024-07-25 10:48:11.421247] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:41.787 [2024-07-25 10:48:11.421342] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68391 ] 00:09:42.045 [2024-07-25 10:48:11.559208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:42.045 [2024-07-25 10:48:11.687493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.045 [2024-07-25 10:48:11.687632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.045 [2024-07-25 10:48:11.687638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.045 [2024-07-25 10:48:11.752292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:42.304 I/O targets: 00:09:42.304 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:42.304 00:09:42.304 00:09:42.304 CUnit - A unit testing framework for C - Version 2.1-3 00:09:42.304 http://cunit.sourceforge.net/ 00:09:42.304 00:09:42.304 00:09:42.304 Suite: bdevio tests on: Nvme1n1 00:09:42.304 Test: blockdev write read block ...passed 00:09:42.304 Test: blockdev write zeroes read block ...passed 00:09:42.304 Test: blockdev write zeroes read no split ...passed 00:09:42.304 Test: blockdev write zeroes read split ...passed 00:09:42.304 Test: blockdev write zeroes read split partial ...passed 00:09:42.304 Test: blockdev reset ...[2024-07-25 10:48:11.905371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:42.304 [2024-07-25 10:48:11.905478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14db7c0 (9): Bad file descriptor 00:09:42.304 [2024-07-25 10:48:11.923498] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:42.304 passed 00:09:42.304 Test: blockdev write read 8 blocks ...passed 00:09:42.304 Test: blockdev write read size > 128k ...passed 00:09:42.304 Test: blockdev write read invalid size ...passed 00:09:42.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.304 Test: blockdev write read max offset ...passed 00:09:42.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.304 Test: blockdev writev readv 8 blocks ...passed 00:09:42.304 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.304 Test: blockdev writev readv block ...passed 00:09:42.304 Test: blockdev writev readv size > 128k ...passed 00:09:42.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.304 Test: blockdev comparev and writev ...[2024-07-25 10:48:11.933578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.304 [2024-07-25 10:48:11.933628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:42.304 [2024-07-25 10:48:11.933652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.304 [2024-07-25 10:48:11.933666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:42.304 [2024-07-25 10:48:11.934215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.304 [2024-07-25 10:48:11.934258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:42.304 [2024-07-25 10:48:11.934280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.304 [2024-07-25 10:48:11.934294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:42.304 [2024-07-25 10:48:11.934914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.304 [2024-07-25 10:48:11.934954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:42.304 [2024-07-25 10:48:11.934976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.304 [2024-07-25 10:48:11.934989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:42.304 [2024-07-25 10:48:11.935584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.304 [2024-07-25 10:48:11.935615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:42.304 [2024-07-25 10:48:11.935636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.304 [2024-07-25 10:48:11.935650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:42.304 passed 00:09:42.304 Test: blockdev nvme passthru rw ...passed 00:09:42.304 Test: blockdev nvme passthru vendor specific ...[2024-07-25 10:48:11.936352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.304 [2024-07-25 10:48:11.936380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:42.304 [2024-07-25 10:48:11.936499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.304 [2024-07-25 10:48:11.936519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:42.304 [2024-07-25 10:48:11.936646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.304 [2024-07-25 10:48:11.936680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:42.304 passed 00:09:42.304 Test: blockdev nvme admin passthru ...[2024-07-25 10:48:11.936790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.304 [2024-07-25 10:48:11.936809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:42.304 passed 00:09:42.304 Test: blockdev copy ...passed 00:09:42.304 00:09:42.304 Run Summary: Type Total Ran Passed Failed Inactive 00:09:42.304 suites 1 1 n/a 0 0 00:09:42.304 tests 23 23 23 0 0 00:09:42.304 asserts 152 152 152 0 n/a 00:09:42.304 00:09:42.304 Elapsed time = 0.161 seconds 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:42.563 rmmod nvme_tcp 00:09:42.563 rmmod nvme_fabrics 00:09:42.563 rmmod nvme_keyring 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
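Once the bdevio suite reports all 23 tests passed, this trace and the entries that follow tear the setup down again. Reduced to a sketch of the traced commands (the final namespace removal happens inside _remove_spdk_ns with tracing disabled, so that last step is an assumption about what the helper cleans up):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
    modprobe -v -r nvme-tcp        # unload initiator-side kernel modules
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # stop the nvmf_tgt app (pid 68349 in this run)
    ip -4 addr flush nvmf_init_if  # clear the initiator interface
    ip netns del nvmf_tgt_ns_spdk  # assumed: the namespace cleanup done by _remove_spdk_ns
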
00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68349 ']' 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68349 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 68349 ']' 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 68349 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68349 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:42.563 killing process with pid 68349 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68349' 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 68349 00:09:42.563 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 68349 00:09:42.822 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.822 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.822 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.822 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.822 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.822 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.822 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.822 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:43.080 00:09:43.080 real 0m2.817s 00:09:43.080 user 0m9.474s 00:09:43.080 sys 0m0.771s 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.080 ************************************ 00:09:43.080 END TEST nvmf_bdevio 00:09:43.080 ************************************ 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:43.080 00:09:43.080 real 2m33.370s 00:09:43.080 user 6m50.244s 00:09:43.080 sys 0m52.601s 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.080 ************************************ 00:09:43.080 END TEST nvmf_target_core 00:09:43.080 ************************************ 00:09:43.080 10:48:12 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:43.080 10:48:12 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:43.080 10:48:12 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.080 10:48:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.080 ************************************ 00:09:43.080 START TEST nvmf_target_extra 00:09:43.080 ************************************ 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:43.080 * Looking for test storage... 00:09:43.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.080 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:43.081 10:48:12 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:43.081 ************************************ 00:09:43.081 START TEST nvmf_auth_target 00:09:43.081 ************************************ 00:09:43.081 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:43.340 * Looking for test storage... 00:09:43.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.340 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.341 10:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.341 10:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:43.341 Cannot find device "nvmf_tgt_br" 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.341 Cannot find device "nvmf_tgt_br2" 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:43.341 Cannot find device "nvmf_tgt_br" 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:43.341 Cannot find device "nvmf_tgt_br2" 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:09:43.341 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:43.341 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:43.341 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.341 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:43.341 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.341 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:43.341 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:43.341 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:43.341 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:43.341 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:43.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:09:43.601 00:09:43.601 --- 10.0.0.2 ping statistics --- 00:09:43.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.601 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:43.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:43.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:09:43.601 00:09:43.601 --- 10.0.0.3 ping statistics --- 00:09:43.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.601 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:43.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:43.601 00:09:43.601 --- 10.0.0.1 ping statistics --- 00:09:43.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.601 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=68612 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 68612 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68612 ']' 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
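The trace up to this point is nvmf_veth_init building the test network: one namespace for the target, three veth pairs, a bridge joining the host-side peers, and an iptables rule opening TCP port 4420, verified with the three pings. A condensed sketch of that topology, using only the interface names, addresses and commands visible in the trace (the "delete leftovers first" steps and error handling are omitted):

# Target-side interfaces live in their own network namespace.
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator, two for the target listeners.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listen addresses.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so 10.0.0.1 can reach both target addresses.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in and across the bridge, then sanity-check with ping.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1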
00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.601 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=68650 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a8986199db56bce49b72735c3f4e5cb496add606cf571afd 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BDB 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a8986199db56bce49b72735c3f4e5cb496add606cf571afd 0 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a8986199db56bce49b72735c3f4e5cb496add606cf571afd 0 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a8986199db56bce49b72735c3f4e5cb496add606cf571afd 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:44.981 10:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BDB 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BDB 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.BDB 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1ba3ac44b8d1f53a4899d342c921a07e6c453fdedd113a4bfc0277f3a584e007 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.z1r 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1ba3ac44b8d1f53a4899d342c921a07e6c453fdedd113a4bfc0277f3a584e007 3 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1ba3ac44b8d1f53a4899d342c921a07e6c453fdedd113a4bfc0277f3a584e007 3 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1ba3ac44b8d1f53a4899d342c921a07e6c453fdedd113a4bfc0277f3a584e007 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.z1r 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.z1r 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.z1r 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:44.981 10:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8e713ed5c04751e63f9d5f36d235648d 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.YLz 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8e713ed5c04751e63f9d5f36d235648d 1 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8e713ed5c04751e63f9d5f36d235648d 1 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8e713ed5c04751e63f9d5f36d235648d 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.YLz 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.YLz 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.YLz 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7e0acfe884e0c4858221cb653ccd543078f6faab1d67f8ce 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1GK 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7e0acfe884e0c4858221cb653ccd543078f6faab1d67f8ce 2 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7e0acfe884e0c4858221cb653ccd543078f6faab1d67f8ce 2 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7e0acfe884e0c4858221cb653ccd543078f6faab1d67f8ce 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1GK 00:09:44.981 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1GK 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.1GK 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fff58e425f723a50a0a7daac3966fd4575a294138ae70914 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Bd9 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fff58e425f723a50a0a7daac3966fd4575a294138ae70914 2 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fff58e425f723a50a0a7daac3966fd4575a294138ae70914 2 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fff58e425f723a50a0a7daac3966fd4575a294138ae70914 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:44.982 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Bd9 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Bd9 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Bd9 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:45.241 10:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fbb8c440cc4de442ee0f2f5ce02cfe8c 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jwQ 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fbb8c440cc4de442ee0f2f5ce02cfe8c 1 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fbb8c440cc4de442ee0f2f5ce02cfe8c 1 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fbb8c440cc4de442ee0f2f5ce02cfe8c 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jwQ 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jwQ 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.jwQ 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=54c789ee98d5d4babc30bdda88738bccfc83f13506bab6b63afc3a214876288d 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Bun 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
54c789ee98d5d4babc30bdda88738bccfc83f13506bab6b63afc3a214876288d 3 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 54c789ee98d5d4babc30bdda88738bccfc83f13506bab6b63afc3a214876288d 3 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=54c789ee98d5d4babc30bdda88738bccfc83f13506bab6b63afc3a214876288d 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Bun 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Bun 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Bun 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 68612 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68612 ']' 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.241 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.500 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.500 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:45.500 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 68650 /var/tmp/host.sock 00:09:45.500 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68650 ']' 00:09:45.500 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:09:45.500 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:45.500 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
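The gen_dhchap_key calls above draw random bytes from /dev/urandom with xxd, wrap the hex string in a DHHC-1 secret through an inline python step, and store the result with mode 0600. Decoding the secrets that later appear on the nvme connect lines gives the ASCII hex key followed by four extra bytes, which look like a little-endian CRC-32 of the key (the convention nvme-cli uses), so the python helper in this sketch is a best guess at what format_dhchap_key does rather than the exact implementation:

hmac_id=0                                 # 0=null, 1=sha256, 2=sha384, 3=sha512
hex_key=$(xxd -p -c0 -l 24 /dev/urandom)  # 24 random bytes -> 48 hex characters
key_file=$(mktemp -t spdk.key-sketch.XXX)

# Assumed secret layout: DHHC-1:<hmac id>:base64(hex key + CRC-32 of the key):
python3 - "$hex_key" "$hmac_id" > "$key_file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
PY

chmod 0600 "$key_file"
cat "$key_file"                           # e.g. DHHC-1:00:YTg5...cw==: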
00:09:45.500 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.500 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BDB 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.BDB 00:09:45.758 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.BDB 00:09:46.017 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.z1r ]] 00:09:46.017 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z1r 00:09:46.017 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.017 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.017 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.017 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z1r 00:09:46.017 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z1r 00:09:46.275 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:46.275 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YLz 00:09:46.275 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.275 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.275 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.275 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.YLz 00:09:46.276 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.YLz 00:09:46.534 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.1GK ]] 00:09:46.534 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1GK 00:09:46.534 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.534 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.534 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.534 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1GK 00:09:46.534 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1GK 00:09:46.791 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:46.791 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Bd9 00:09:46.791 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.791 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.791 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.791 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Bd9 00:09:46.791 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Bd9 00:09:47.049 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.jwQ ]] 00:09:47.049 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jwQ 00:09:47.049 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.049 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.049 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.049 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jwQ 00:09:47.049 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jwQ 00:09:47.307 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:47.307 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Bun 00:09:47.307 10:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.307 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.307 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.307 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Bun 00:09:47.307 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Bun 00:09:47.565 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:09:47.565 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:09:47.565 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:47.565 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:47.565 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:47.565 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:47.822 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:09:48.080 00:09:48.080 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:48.080 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:48.080 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:48.353 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:48.353 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:48.353 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.353 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.353 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.353 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:48.353 { 00:09:48.353 "cntlid": 1, 00:09:48.353 "qid": 0, 00:09:48.353 "state": "enabled", 00:09:48.353 "thread": "nvmf_tgt_poll_group_000", 00:09:48.353 "listen_address": { 00:09:48.353 "trtype": "TCP", 00:09:48.353 "adrfam": "IPv4", 00:09:48.353 "traddr": "10.0.0.2", 00:09:48.353 "trsvcid": "4420" 00:09:48.353 }, 00:09:48.353 "peer_address": { 00:09:48.353 "trtype": "TCP", 00:09:48.353 "adrfam": "IPv4", 00:09:48.353 "traddr": "10.0.0.1", 00:09:48.353 "trsvcid": "43636" 00:09:48.353 }, 00:09:48.353 "auth": { 00:09:48.353 "state": "completed", 00:09:48.353 "digest": "sha256", 00:09:48.353 "dhgroup": "null" 00:09:48.353 } 00:09:48.353 } 00:09:48.353 ]' 00:09:48.353 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:48.610 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:48.610 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:48.610 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:48.610 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:48.610 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:48.610 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:48.610 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:48.868 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:09:54.128 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:54.128 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:09:54.128 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:54.128 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.128 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.128 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.128 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:54.128 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:54.128 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:54.128 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
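Each connect_authenticate round traced here repeats the same RPC sequence: restrict the host to one digest/dhgroup pair, register the host NQN on the subsystem with a DH-HMAC-CHAP key (plus a controller key when bidirectional authentication is tested), attach a controller from the host-side spdk_tgt, and verify that the resulting qpair reports the authentication as completed before detaching again. A condensed sketch with the sockets, NQNs and key names from the log ($host_rpc and $tgt_rpc are just shorthand for the two rpc.py invocations):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_rpc="$rpc -s /var/tmp/host.sock"   # spdk_tgt acting as the NVMe/TCP host
tgt_rpc="$rpc"                          # nvmf_tgt on the default /var/tmp/spdk.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c

# Limit the host to the digest/dhgroup under test and allow it on the subsystem.
$host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach, confirm the qpair authenticated, then tear the controller down again.
$host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$host_rpc bdev_nvme_get_controllers | jq -r '.[].name'                  # expect: nvme0
$tgt_rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # expect: completed
$host_rpc bdev_nvme_detach_controller nvme0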
00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.128 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:54.128 { 00:09:54.128 "cntlid": 3, 00:09:54.128 "qid": 0, 00:09:54.128 "state": "enabled", 00:09:54.128 "thread": "nvmf_tgt_poll_group_000", 00:09:54.128 "listen_address": { 00:09:54.128 "trtype": "TCP", 00:09:54.128 "adrfam": "IPv4", 00:09:54.128 "traddr": "10.0.0.2", 00:09:54.128 "trsvcid": "4420" 00:09:54.128 }, 00:09:54.128 "peer_address": { 00:09:54.129 "trtype": "TCP", 00:09:54.129 "adrfam": "IPv4", 00:09:54.129 "traddr": "10.0.0.1", 00:09:54.129 "trsvcid": "48676" 00:09:54.129 }, 00:09:54.129 "auth": { 00:09:54.129 "state": "completed", 00:09:54.129 "digest": "sha256", 00:09:54.129 "dhgroup": "null" 00:09:54.129 } 00:09:54.129 } 00:09:54.129 ]' 00:09:54.129 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:54.129 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.129 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:54.129 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:54.129 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:54.129 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.129 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.129 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.387 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:09:55.321 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.321 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:55.321 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.321 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
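The same key material is also exercised through the Linux kernel initiator: nvme-cli is handed the raw DHHC-1 strings directly (the host secret and, for bidirectional authentication, the controller secret) and the connection is torn down again with nvme disconnect. The placeholders below stand in for the formatted secrets generated earlier; everything else mirrors the connect lines in the trace:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c
hostid=bb4b8bd3-cfb4-4368-bf29-91254747069c

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:01:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0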
00:09:55.321 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.321 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:55.321 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:55.321 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.321 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.888 00:09:55.888 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:55.888 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.888 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:56.146 { 00:09:56.146 "cntlid": 5, 00:09:56.146 "qid": 0, 00:09:56.146 "state": "enabled", 00:09:56.146 "thread": "nvmf_tgt_poll_group_000", 00:09:56.146 "listen_address": { 00:09:56.146 "trtype": "TCP", 00:09:56.146 "adrfam": "IPv4", 00:09:56.146 "traddr": "10.0.0.2", 00:09:56.146 "trsvcid": "4420" 00:09:56.146 }, 00:09:56.146 "peer_address": { 00:09:56.146 "trtype": "TCP", 00:09:56.146 "adrfam": "IPv4", 00:09:56.146 "traddr": "10.0.0.1", 00:09:56.146 "trsvcid": "48706" 00:09:56.146 }, 00:09:56.146 "auth": { 00:09:56.146 "state": "completed", 00:09:56.146 "digest": "sha256", 00:09:56.146 "dhgroup": "null" 00:09:56.146 } 00:09:56.146 } 00:09:56.146 ]' 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.146 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.404 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:09:57.338 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.338 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:57.338 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.338 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.338 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.338 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:57.338 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:57.338 10:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:57.338 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:57.904 00:09:57.904 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:57.904 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.904 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:58.162 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:58.162 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:58.162 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.162 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.162 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.162 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:58.162 { 00:09:58.162 "cntlid": 7, 00:09:58.162 "qid": 0, 00:09:58.162 "state": "enabled", 00:09:58.162 "thread": "nvmf_tgt_poll_group_000", 00:09:58.163 "listen_address": { 00:09:58.163 "trtype": "TCP", 00:09:58.163 "adrfam": "IPv4", 00:09:58.163 "traddr": 
"10.0.0.2", 00:09:58.163 "trsvcid": "4420" 00:09:58.163 }, 00:09:58.163 "peer_address": { 00:09:58.163 "trtype": "TCP", 00:09:58.163 "adrfam": "IPv4", 00:09:58.163 "traddr": "10.0.0.1", 00:09:58.163 "trsvcid": "48740" 00:09:58.163 }, 00:09:58.163 "auth": { 00:09:58.163 "state": "completed", 00:09:58.163 "digest": "sha256", 00:09:58.163 "dhgroup": "null" 00:09:58.163 } 00:09:58.163 } 00:09:58.163 ]' 00:09:58.163 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:58.163 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:58.163 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:58.163 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:58.163 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:58.163 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:58.163 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:58.163 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.425 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:09:59.369 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:59.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:59.369 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:09:59.369 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.369 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.369 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.369 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:59.369 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:59.369 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:59.369 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:59.369 10:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.369 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.934 00:09:59.934 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:59.934 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:59.934 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:00.191 { 00:10:00.191 "cntlid": 9, 00:10:00.191 "qid": 0, 00:10:00.191 "state": "enabled", 00:10:00.191 "thread": "nvmf_tgt_poll_group_000", 00:10:00.191 "listen_address": { 00:10:00.191 "trtype": "TCP", 00:10:00.191 "adrfam": "IPv4", 00:10:00.191 "traddr": "10.0.0.2", 00:10:00.191 "trsvcid": "4420" 00:10:00.191 }, 00:10:00.191 "peer_address": { 00:10:00.191 "trtype": "TCP", 00:10:00.191 "adrfam": "IPv4", 00:10:00.191 "traddr": "10.0.0.1", 00:10:00.191 "trsvcid": "33778" 00:10:00.191 }, 00:10:00.191 "auth": { 00:10:00.191 "state": "completed", 00:10:00.191 "digest": "sha256", 00:10:00.191 "dhgroup": "ffdhe2048" 00:10:00.191 } 00:10:00.191 } 
00:10:00.191 ]' 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.191 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.449 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:10:01.381 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:01.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:01.381 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:01.381 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.381 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.381 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.381 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:01.381 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:01.381 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.381 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.638 00:10:01.638 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:01.638 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:01.638 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:02.203 { 00:10:02.203 "cntlid": 11, 00:10:02.203 "qid": 0, 00:10:02.203 "state": "enabled", 00:10:02.203 "thread": "nvmf_tgt_poll_group_000", 00:10:02.203 "listen_address": { 00:10:02.203 "trtype": "TCP", 00:10:02.203 "adrfam": "IPv4", 00:10:02.203 "traddr": "10.0.0.2", 00:10:02.203 "trsvcid": "4420" 00:10:02.203 }, 00:10:02.203 "peer_address": { 00:10:02.203 "trtype": "TCP", 00:10:02.203 "adrfam": "IPv4", 00:10:02.203 "traddr": "10.0.0.1", 00:10:02.203 "trsvcid": "33812" 00:10:02.203 }, 00:10:02.203 "auth": { 00:10:02.203 "state": "completed", 00:10:02.203 "digest": "sha256", 00:10:02.203 "dhgroup": "ffdhe2048" 00:10:02.203 } 00:10:02.203 } 00:10:02.203 ]' 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:02.203 10:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.203 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.460 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:10:03.026 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.026 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:03.026 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.026 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.026 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.026 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:03.026 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:03.026 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.285 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.850 00:10:03.850 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:03.850 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.850 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:04.107 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.107 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.107 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:04.108 { 00:10:04.108 "cntlid": 13, 00:10:04.108 "qid": 0, 00:10:04.108 "state": "enabled", 00:10:04.108 "thread": "nvmf_tgt_poll_group_000", 00:10:04.108 "listen_address": { 00:10:04.108 "trtype": "TCP", 00:10:04.108 "adrfam": "IPv4", 00:10:04.108 "traddr": "10.0.0.2", 00:10:04.108 "trsvcid": "4420" 00:10:04.108 }, 00:10:04.108 "peer_address": { 00:10:04.108 "trtype": "TCP", 00:10:04.108 "adrfam": "IPv4", 00:10:04.108 "traddr": "10.0.0.1", 00:10:04.108 "trsvcid": "33828" 00:10:04.108 }, 00:10:04.108 "auth": { 00:10:04.108 "state": "completed", 00:10:04.108 "digest": "sha256", 00:10:04.108 "dhgroup": "ffdhe2048" 00:10:04.108 } 00:10:04.108 } 00:10:04.108 ]' 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.108 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.366 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:10:05.301 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.301 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:05.302 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:05.561 00:10:05.820 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:05.820 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.820 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:06.078 { 00:10:06.078 "cntlid": 15, 00:10:06.078 "qid": 0, 00:10:06.078 "state": "enabled", 00:10:06.078 "thread": "nvmf_tgt_poll_group_000", 00:10:06.078 "listen_address": { 00:10:06.078 "trtype": "TCP", 00:10:06.078 "adrfam": "IPv4", 00:10:06.078 "traddr": "10.0.0.2", 00:10:06.078 "trsvcid": "4420" 00:10:06.078 }, 00:10:06.078 "peer_address": { 00:10:06.078 "trtype": "TCP", 00:10:06.078 "adrfam": "IPv4", 00:10:06.078 "traddr": "10.0.0.1", 00:10:06.078 "trsvcid": "33852" 00:10:06.078 }, 00:10:06.078 "auth": { 00:10:06.078 "state": "completed", 00:10:06.078 "digest": "sha256", 00:10:06.078 "dhgroup": "ffdhe2048" 00:10:06.078 } 00:10:06.078 } 00:10:06.078 ]' 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.078 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.337 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:10:07.272 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.272 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:07.272 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.272 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.272 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.272 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:07.272 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:07.272 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:07.272 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.530 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.788 00:10:07.788 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:07.788 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.788 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:08.047 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.047 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.047 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.047 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.047 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.047 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:08.047 { 00:10:08.047 "cntlid": 17, 00:10:08.047 "qid": 0, 00:10:08.047 "state": "enabled", 00:10:08.047 "thread": "nvmf_tgt_poll_group_000", 00:10:08.047 "listen_address": { 00:10:08.047 "trtype": "TCP", 00:10:08.047 "adrfam": "IPv4", 00:10:08.047 "traddr": "10.0.0.2", 00:10:08.047 "trsvcid": "4420" 00:10:08.047 }, 00:10:08.047 "peer_address": { 00:10:08.047 "trtype": "TCP", 00:10:08.047 "adrfam": "IPv4", 00:10:08.047 "traddr": "10.0.0.1", 00:10:08.047 "trsvcid": "33882" 00:10:08.047 }, 00:10:08.047 "auth": { 00:10:08.047 "state": "completed", 00:10:08.047 "digest": "sha256", 00:10:08.047 "dhgroup": "ffdhe3072" 00:10:08.047 } 00:10:08.047 } 00:10:08.047 ]' 00:10:08.047 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:08.305 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.305 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:08.305 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:08.305 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:08.305 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.305 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.305 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:10:09.500 10:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.500 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:09.500 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.500 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.500 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.500 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:09.500 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.500 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.500 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:09.500 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:09.500 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:09.500 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:09.500 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:09.500 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.500 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.500 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.500 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.759 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.759 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.759 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.018 00:10:10.018 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:10.018 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.018 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:10.276 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.276 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.276 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.276 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.276 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.276 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:10.276 { 00:10:10.276 "cntlid": 19, 00:10:10.276 "qid": 0, 00:10:10.276 "state": "enabled", 00:10:10.276 "thread": "nvmf_tgt_poll_group_000", 00:10:10.276 "listen_address": { 00:10:10.276 "trtype": "TCP", 00:10:10.276 "adrfam": "IPv4", 00:10:10.276 "traddr": "10.0.0.2", 00:10:10.276 "trsvcid": "4420" 00:10:10.276 }, 00:10:10.276 "peer_address": { 00:10:10.276 "trtype": "TCP", 00:10:10.276 "adrfam": "IPv4", 00:10:10.276 "traddr": "10.0.0.1", 00:10:10.276 "trsvcid": "48944" 00:10:10.276 }, 00:10:10.276 "auth": { 00:10:10.276 "state": "completed", 00:10:10.276 "digest": "sha256", 00:10:10.276 "dhgroup": "ffdhe3072" 00:10:10.276 } 00:10:10.276 } 00:10:10.276 ]' 00:10:10.276 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:10.276 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.276 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:10.535 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:10.535 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:10.535 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.535 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.535 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.793 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:10:11.360 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.360 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:11.360 
10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.360 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.360 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.360 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:11.360 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:11.360 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.964 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.222 00:10:12.222 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:12.222 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.222 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:12.481 { 00:10:12.481 "cntlid": 21, 00:10:12.481 "qid": 0, 00:10:12.481 "state": "enabled", 00:10:12.481 "thread": "nvmf_tgt_poll_group_000", 00:10:12.481 "listen_address": { 00:10:12.481 "trtype": "TCP", 00:10:12.481 "adrfam": "IPv4", 00:10:12.481 "traddr": "10.0.0.2", 00:10:12.481 "trsvcid": "4420" 00:10:12.481 }, 00:10:12.481 "peer_address": { 00:10:12.481 "trtype": "TCP", 00:10:12.481 "adrfam": "IPv4", 00:10:12.481 "traddr": "10.0.0.1", 00:10:12.481 "trsvcid": "48972" 00:10:12.481 }, 00:10:12.481 "auth": { 00:10:12.481 "state": "completed", 00:10:12.481 "digest": "sha256", 00:10:12.481 "dhgroup": "ffdhe3072" 00:10:12.481 } 00:10:12.481 } 00:10:12.481 ]' 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.481 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.740 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:10:13.677 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.677 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:13.677 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.677 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.677 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.677 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:13.677 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:13.677 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:13.935 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:14.193 00:10:14.193 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:14.193 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:14.193 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:14.451 { 00:10:14.451 "cntlid": 
23, 00:10:14.451 "qid": 0, 00:10:14.451 "state": "enabled", 00:10:14.451 "thread": "nvmf_tgt_poll_group_000", 00:10:14.451 "listen_address": { 00:10:14.451 "trtype": "TCP", 00:10:14.451 "adrfam": "IPv4", 00:10:14.451 "traddr": "10.0.0.2", 00:10:14.451 "trsvcid": "4420" 00:10:14.451 }, 00:10:14.451 "peer_address": { 00:10:14.451 "trtype": "TCP", 00:10:14.451 "adrfam": "IPv4", 00:10:14.451 "traddr": "10.0.0.1", 00:10:14.451 "trsvcid": "48996" 00:10:14.451 }, 00:10:14.451 "auth": { 00:10:14.451 "state": "completed", 00:10:14.451 "digest": "sha256", 00:10:14.451 "dhgroup": "ffdhe3072" 00:10:14.451 } 00:10:14.451 } 00:10:14.451 ]' 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:14.451 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:14.709 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.709 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.709 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.967 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:10:15.535 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.535 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:15.535 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.535 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.535 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.535 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:15.535 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:15.535 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:15.535 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:15.794 10:48:45 
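(For reference: every digest/dhgroup/key combination in the trace above and below runs the same target/auth.sh cycle. A minimal shell sketch of one iteration, using the sockets, NQNs and key names that appear in this run, is shown here; rpc_cmd is the test framework's target-side rpc.py wrapper, and key0/ckey0 refer to DH-HMAC-CHAP keys registered earlier in the test, outside this excerpt.)

    # target side: allow the host NQN to authenticate with key0 (ckey0 adds bidirectional auth)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side (bdev_nvme initiator on /var/tmp/host.sock): pin the digest/dhgroup, then attach
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # verify the controller came up, then tear down before the next combination
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0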
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:15.794 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:15.794 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:15.794 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:15.794 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:15.794 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.794 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.794 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.794 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.794 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.795 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.795 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.362 00:10:16.362 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:16.362 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:16.362 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:16.628 { 00:10:16.628 "cntlid": 25, 00:10:16.628 "qid": 0, 00:10:16.628 "state": "enabled", 00:10:16.628 "thread": "nvmf_tgt_poll_group_000", 00:10:16.628 "listen_address": { 00:10:16.628 "trtype": "TCP", 00:10:16.628 "adrfam": "IPv4", 00:10:16.628 "traddr": "10.0.0.2", 00:10:16.628 "trsvcid": "4420" 00:10:16.628 }, 00:10:16.628 "peer_address": { 00:10:16.628 "trtype": "TCP", 00:10:16.628 
"adrfam": "IPv4", 00:10:16.628 "traddr": "10.0.0.1", 00:10:16.628 "trsvcid": "49024" 00:10:16.628 }, 00:10:16.628 "auth": { 00:10:16.628 "state": "completed", 00:10:16.628 "digest": "sha256", 00:10:16.628 "dhgroup": "ffdhe4096" 00:10:16.628 } 00:10:16.628 } 00:10:16.628 ]' 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.628 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.887 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:17.822 10:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.822 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.389 00:10:18.389 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:18.389 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:18.389 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:18.648 { 00:10:18.648 "cntlid": 27, 00:10:18.648 "qid": 0, 00:10:18.648 "state": "enabled", 00:10:18.648 "thread": "nvmf_tgt_poll_group_000", 00:10:18.648 "listen_address": { 00:10:18.648 "trtype": "TCP", 00:10:18.648 "adrfam": "IPv4", 00:10:18.648 "traddr": "10.0.0.2", 00:10:18.648 "trsvcid": "4420" 00:10:18.648 }, 00:10:18.648 "peer_address": { 00:10:18.648 "trtype": "TCP", 00:10:18.648 "adrfam": "IPv4", 00:10:18.648 "traddr": "10.0.0.1", 00:10:18.648 "trsvcid": "57346" 00:10:18.648 }, 00:10:18.648 "auth": { 00:10:18.648 "state": "completed", 00:10:18.648 "digest": "sha256", 00:10:18.648 "dhgroup": "ffdhe4096" 00:10:18.648 } 00:10:18.648 } 00:10:18.648 ]' 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.648 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.907 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:10:19.841 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.841 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:19.841 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.841 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.842 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.842 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:19.842 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:19.842 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.100 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.359 00:10:20.359 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:20.359 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:20.359 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.617 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.617 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.617 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.617 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.617 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.617 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:20.617 { 00:10:20.617 "cntlid": 29, 00:10:20.617 "qid": 0, 00:10:20.617 "state": "enabled", 00:10:20.617 "thread": "nvmf_tgt_poll_group_000", 00:10:20.617 "listen_address": { 00:10:20.617 "trtype": "TCP", 00:10:20.617 "adrfam": "IPv4", 00:10:20.617 "traddr": "10.0.0.2", 00:10:20.617 "trsvcid": "4420" 00:10:20.617 }, 00:10:20.617 "peer_address": { 00:10:20.617 "trtype": "TCP", 00:10:20.617 "adrfam": "IPv4", 00:10:20.617 "traddr": "10.0.0.1", 00:10:20.617 "trsvcid": "57372" 00:10:20.617 }, 00:10:20.617 "auth": { 00:10:20.617 "state": "completed", 00:10:20.617 "digest": "sha256", 00:10:20.617 "dhgroup": "ffdhe4096" 00:10:20.617 } 00:10:20.617 } 00:10:20.617 ]' 00:10:20.617 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:20.617 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.617 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:20.876 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:20.876 10:48:50 
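(The [[ ... == ... ]] checks interleaved in the trace are the verification step of connect_authenticate; condensed into a sketch, the assertion against the qpair JSON dumped above looks roughly like this, with the expected values coming from the digest/dhgroup of the current iteration.)

    # fetch the subsystem's qpairs on the target and check the negotiated auth parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]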
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:20.876 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.876 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.876 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.134 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:10:21.702 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.702 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:21.702 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.702 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.702 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.702 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:21.702 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:21.702 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:21.960 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:21.960 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:21.960 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:21.960 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:21.960 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:21.960 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.961 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:10:21.961 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.961 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.961 10:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.961 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:21.961 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:22.527 00:10:22.527 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:22.527 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.527 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:22.786 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.786 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.786 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.786 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.786 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.786 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:22.786 { 00:10:22.786 "cntlid": 31, 00:10:22.786 "qid": 0, 00:10:22.786 "state": "enabled", 00:10:22.786 "thread": "nvmf_tgt_poll_group_000", 00:10:22.786 "listen_address": { 00:10:22.786 "trtype": "TCP", 00:10:22.786 "adrfam": "IPv4", 00:10:22.786 "traddr": "10.0.0.2", 00:10:22.786 "trsvcid": "4420" 00:10:22.786 }, 00:10:22.786 "peer_address": { 00:10:22.786 "trtype": "TCP", 00:10:22.786 "adrfam": "IPv4", 00:10:22.786 "traddr": "10.0.0.1", 00:10:22.786 "trsvcid": "57392" 00:10:22.786 }, 00:10:22.786 "auth": { 00:10:22.786 "state": "completed", 00:10:22.786 "digest": "sha256", 00:10:22.786 "dhgroup": "ffdhe4096" 00:10:22.786 } 00:10:22.786 } 00:10:22.786 ]' 00:10:22.786 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:22.786 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.787 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:22.787 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:22.787 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:23.044 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.044 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.044 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
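(A note on the ckey=(...) lines that recur in the trace: the controller key is optional. Inside connect_authenticate, $3 is the key index, and the ${var:+word} expansion produces the --dhchap-ctrlr-key option only when ckeys[$3] is non-empty; for key3 in this run it is empty, which is why the key3 add_host and attach_controller calls above carry no controller key and the authentication is unidirectional. The array is presumably expanded as "${ckey[@]}" in those calls.)

    # from target/auth.sh line 37, as echoed in the trace:
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # empty array when no controller key is defined for this keyid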
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.302 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:10:23.867 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.867 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:23.867 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.867 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.867 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.867 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:23.867 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:23.867 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:23.867 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:24.125 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.691 00:10:24.691 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:24.691 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.692 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:24.950 { 00:10:24.950 "cntlid": 33, 00:10:24.950 "qid": 0, 00:10:24.950 "state": "enabled", 00:10:24.950 "thread": "nvmf_tgt_poll_group_000", 00:10:24.950 "listen_address": { 00:10:24.950 "trtype": "TCP", 00:10:24.950 "adrfam": "IPv4", 00:10:24.950 "traddr": "10.0.0.2", 00:10:24.950 "trsvcid": "4420" 00:10:24.950 }, 00:10:24.950 "peer_address": { 00:10:24.950 "trtype": "TCP", 00:10:24.950 "adrfam": "IPv4", 00:10:24.950 "traddr": "10.0.0.1", 00:10:24.950 "trsvcid": "57440" 00:10:24.950 }, 00:10:24.950 "auth": { 00:10:24.950 "state": "completed", 00:10:24.950 "digest": "sha256", 00:10:24.950 "dhgroup": "ffdhe6144" 00:10:24.950 } 00:10:24.950 } 00:10:24.950 ]' 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.950 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.208 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid 
bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:10:26.169 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.169 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:26.169 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.169 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.169 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.169 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:26.169 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:26.169 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.469 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.728 00:10:26.728 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:26.728 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:26.728 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:27.295 { 00:10:27.295 "cntlid": 35, 00:10:27.295 "qid": 0, 00:10:27.295 "state": "enabled", 00:10:27.295 "thread": "nvmf_tgt_poll_group_000", 00:10:27.295 "listen_address": { 00:10:27.295 "trtype": "TCP", 00:10:27.295 "adrfam": "IPv4", 00:10:27.295 "traddr": "10.0.0.2", 00:10:27.295 "trsvcid": "4420" 00:10:27.295 }, 00:10:27.295 "peer_address": { 00:10:27.295 "trtype": "TCP", 00:10:27.295 "adrfam": "IPv4", 00:10:27.295 "traddr": "10.0.0.1", 00:10:27.295 "trsvcid": "57470" 00:10:27.295 }, 00:10:27.295 "auth": { 00:10:27.295 "state": "completed", 00:10:27.295 "digest": "sha256", 00:10:27.295 "dhgroup": "ffdhe6144" 00:10:27.295 } 00:10:27.295 } 00:10:27.295 ]' 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.295 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.296 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.553 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:10:28.488 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.488 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.488 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:28.488 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.488 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.488 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.488 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:28.488 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:28.488 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.746 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.311 00:10:29.311 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:29.311 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:29.311 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
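(Each iteration also re-checks the same key pair with the kernel initiator, as in the nvme connect/disconnect lines traced above. A sketch of that leg follows; $host_secret and $ctrl_secret are placeholder variable names for the literal DHHC-1:xx:...: strings shown in the trace, and --dhchap-ctrl-secret is passed only when the iteration uses a controller key.)

    # kernel initiator: secrets are passed literally rather than as keyring names
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c \
        --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c \
        --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # target drops the host entry before moving on to the next key/dhgroup combination
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c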
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:29.569 { 00:10:29.569 "cntlid": 37, 00:10:29.569 "qid": 0, 00:10:29.569 "state": "enabled", 00:10:29.569 "thread": "nvmf_tgt_poll_group_000", 00:10:29.569 "listen_address": { 00:10:29.569 "trtype": "TCP", 00:10:29.569 "adrfam": "IPv4", 00:10:29.569 "traddr": "10.0.0.2", 00:10:29.569 "trsvcid": "4420" 00:10:29.569 }, 00:10:29.569 "peer_address": { 00:10:29.569 "trtype": "TCP", 00:10:29.569 "adrfam": "IPv4", 00:10:29.569 "traddr": "10.0.0.1", 00:10:29.569 "trsvcid": "46842" 00:10:29.569 }, 00:10:29.569 "auth": { 00:10:29.569 "state": "completed", 00:10:29.569 "digest": "sha256", 00:10:29.569 "dhgroup": "ffdhe6144" 00:10:29.569 } 00:10:29.569 } 00:10:29.569 ]' 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.569 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.136 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:10:30.703 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.703 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:30.703 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:30.703 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.703 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.703 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:30.703 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:30.703 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:30.960 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:31.530 00:10:31.530 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:31.530 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.530 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.094 10:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.094 { 00:10:32.094 "cntlid": 39, 00:10:32.094 "qid": 0, 00:10:32.094 "state": "enabled", 00:10:32.094 "thread": "nvmf_tgt_poll_group_000", 00:10:32.094 "listen_address": { 00:10:32.094 "trtype": "TCP", 00:10:32.094 "adrfam": "IPv4", 00:10:32.094 "traddr": "10.0.0.2", 00:10:32.094 "trsvcid": "4420" 00:10:32.094 }, 00:10:32.094 "peer_address": { 00:10:32.094 "trtype": "TCP", 00:10:32.094 "adrfam": "IPv4", 00:10:32.094 "traddr": "10.0.0.1", 00:10:32.094 "trsvcid": "46868" 00:10:32.094 }, 00:10:32.094 "auth": { 00:10:32.094 "state": "completed", 00:10:32.094 "digest": "sha256", 00:10:32.094 "dhgroup": "ffdhe6144" 00:10:32.094 } 00:10:32.094 } 00:10:32.094 ]' 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.094 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.095 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.352 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.731 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.732 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.732 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:34.665 00:10:34.665 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:34.665 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:34.665 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:34.923 { 00:10:34.923 "cntlid": 41, 00:10:34.923 "qid": 0, 
00:10:34.923 "state": "enabled", 00:10:34.923 "thread": "nvmf_tgt_poll_group_000", 00:10:34.923 "listen_address": { 00:10:34.923 "trtype": "TCP", 00:10:34.923 "adrfam": "IPv4", 00:10:34.923 "traddr": "10.0.0.2", 00:10:34.923 "trsvcid": "4420" 00:10:34.923 }, 00:10:34.923 "peer_address": { 00:10:34.923 "trtype": "TCP", 00:10:34.923 "adrfam": "IPv4", 00:10:34.923 "traddr": "10.0.0.1", 00:10:34.923 "trsvcid": "46882" 00:10:34.923 }, 00:10:34.923 "auth": { 00:10:34.923 "state": "completed", 00:10:34.923 "digest": "sha256", 00:10:34.923 "dhgroup": "ffdhe8192" 00:10:34.923 } 00:10:34.923 } 00:10:34.923 ]' 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:34.923 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.180 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.180 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.180 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.438 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:10:36.387 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.387 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:36.387 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.387 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.387 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.387 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.387 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:36.387 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.654 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.589 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:37.589 { 00:10:37.589 "cntlid": 43, 00:10:37.589 "qid": 0, 00:10:37.589 "state": "enabled", 00:10:37.589 "thread": "nvmf_tgt_poll_group_000", 00:10:37.589 "listen_address": { 00:10:37.589 "trtype": "TCP", 00:10:37.589 "adrfam": "IPv4", 00:10:37.589 "traddr": "10.0.0.2", 00:10:37.589 "trsvcid": "4420" 00:10:37.589 }, 00:10:37.589 "peer_address": { 00:10:37.589 "trtype": "TCP", 00:10:37.589 "adrfam": "IPv4", 00:10:37.589 "traddr": "10.0.0.1", 
00:10:37.589 "trsvcid": "46900" 00:10:37.589 }, 00:10:37.589 "auth": { 00:10:37.589 "state": "completed", 00:10:37.589 "digest": "sha256", 00:10:37.589 "dhgroup": "ffdhe8192" 00:10:37.589 } 00:10:37.589 } 00:10:37.589 ]' 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:37.589 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.590 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.849 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:37.849 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.849 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.849 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.849 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.108 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:10:38.675 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.675 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:38.675 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.675 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.675 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.675 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:38.675 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:38.675 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:38.934 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:38.934 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.934 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.934 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:38.934 10:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:38.934 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.934 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:38.934 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.934 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.934 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.934 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:38.935 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.502 00:10:39.502 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.502 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.502 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.761 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.761 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.761 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.761 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.761 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.761 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.761 { 00:10:39.761 "cntlid": 45, 00:10:39.761 "qid": 0, 00:10:39.761 "state": "enabled", 00:10:39.761 "thread": "nvmf_tgt_poll_group_000", 00:10:39.761 "listen_address": { 00:10:39.761 "trtype": "TCP", 00:10:39.761 "adrfam": "IPv4", 00:10:39.761 "traddr": "10.0.0.2", 00:10:39.761 "trsvcid": "4420" 00:10:39.761 }, 00:10:39.761 "peer_address": { 00:10:39.761 "trtype": "TCP", 00:10:39.761 "adrfam": "IPv4", 00:10:39.761 "traddr": "10.0.0.1", 00:10:39.761 "trsvcid": "44936" 00:10:39.761 }, 00:10:39.761 "auth": { 00:10:39.761 "state": "completed", 00:10:39.761 "digest": "sha256", 00:10:39.761 "dhgroup": "ffdhe8192" 00:10:39.761 } 00:10:39.761 } 00:10:39.761 ]' 00:10:39.761 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.019 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.019 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.019 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:40.019 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.019 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.019 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.019 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.276 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:10:41.223 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.223 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:41.223 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.223 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.224 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.224 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:41.224 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:41.224 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 
--dhchap-key key3 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:41.506 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:42.073 00:10:42.073 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.073 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.073 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.332 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.332 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.332 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.332 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.332 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.332 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.332 { 00:10:42.332 "cntlid": 47, 00:10:42.332 "qid": 0, 00:10:42.332 "state": "enabled", 00:10:42.332 "thread": "nvmf_tgt_poll_group_000", 00:10:42.332 "listen_address": { 00:10:42.332 "trtype": "TCP", 00:10:42.332 "adrfam": "IPv4", 00:10:42.332 "traddr": "10.0.0.2", 00:10:42.332 "trsvcid": "4420" 00:10:42.332 }, 00:10:42.332 "peer_address": { 00:10:42.332 "trtype": "TCP", 00:10:42.332 "adrfam": "IPv4", 00:10:42.332 "traddr": "10.0.0.1", 00:10:42.332 "trsvcid": "44954" 00:10:42.332 }, 00:10:42.332 "auth": { 00:10:42.332 "state": "completed", 00:10:42.332 "digest": "sha256", 00:10:42.332 "dhgroup": "ffdhe8192" 00:10:42.332 } 00:10:42.332 } 00:10:42.332 ]' 00:10:42.332 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:42.332 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.332 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:42.332 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:42.332 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:42.591 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:10:42.591 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.591 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.849 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:10:43.416 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.416 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:43.416 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.416 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.416 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.416 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:43.416 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:43.416 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:43.416 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:43.416 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:43.674 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:10:43.674 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:43.674 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:43.674 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:43.674 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:43.674 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.674 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.674 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.674 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.674 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.675 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.675 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.933 00:10:43.933 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:43.933 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.933 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.192 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.192 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.192 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.192 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.192 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.192 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.192 { 00:10:44.192 "cntlid": 49, 00:10:44.192 "qid": 0, 00:10:44.192 "state": "enabled", 00:10:44.192 "thread": "nvmf_tgt_poll_group_000", 00:10:44.192 "listen_address": { 00:10:44.192 "trtype": "TCP", 00:10:44.192 "adrfam": "IPv4", 00:10:44.192 "traddr": "10.0.0.2", 00:10:44.192 "trsvcid": "4420" 00:10:44.192 }, 00:10:44.192 "peer_address": { 00:10:44.192 "trtype": "TCP", 00:10:44.192 "adrfam": "IPv4", 00:10:44.192 "traddr": "10.0.0.1", 00:10:44.192 "trsvcid": "44974" 00:10:44.192 }, 00:10:44.192 "auth": { 00:10:44.192 "state": "completed", 00:10:44.192 "digest": "sha384", 00:10:44.192 "dhgroup": "null" 00:10:44.192 } 00:10:44.192 } 00:10:44.192 ]' 00:10:44.192 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:44.192 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:44.192 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:44.451 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:44.451 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:44.451 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.451 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.451 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.709 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:10:45.277 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.277 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:45.277 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.277 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.277 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.277 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.277 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:45.277 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:45.536 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:10:45.536 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:45.536 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:45.536 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:45.536 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:45.536 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.536 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:45.536 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.536 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.536 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.537 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:45.537 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.104 00:10:46.104 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.104 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.104 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.363 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.363 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.363 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.363 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.363 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.363 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.363 { 00:10:46.363 "cntlid": 51, 00:10:46.363 "qid": 0, 00:10:46.363 "state": "enabled", 00:10:46.363 "thread": "nvmf_tgt_poll_group_000", 00:10:46.363 "listen_address": { 00:10:46.363 "trtype": "TCP", 00:10:46.363 "adrfam": "IPv4", 00:10:46.363 "traddr": "10.0.0.2", 00:10:46.363 "trsvcid": "4420" 00:10:46.363 }, 00:10:46.363 "peer_address": { 00:10:46.363 "trtype": "TCP", 00:10:46.363 "adrfam": "IPv4", 00:10:46.363 "traddr": "10.0.0.1", 00:10:46.363 "trsvcid": "44996" 00:10:46.363 }, 00:10:46.363 "auth": { 00:10:46.363 "state": "completed", 00:10:46.363 "digest": "sha384", 00:10:46.363 "dhgroup": "null" 00:10:46.363 } 00:10:46.363 } 00:10:46.363 ]' 00:10:46.363 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.363 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.363 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.363 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:46.363 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.363 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.363 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.363 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.622 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret 
DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:10:47.558 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.558 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:47.558 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.558 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.558 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.558 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.558 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:47.558 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:47.816 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.074 00:10:48.074 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:48.074 10:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:48.074 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.332 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.332 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.332 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.332 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.332 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.332 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.332 { 00:10:48.332 "cntlid": 53, 00:10:48.332 "qid": 0, 00:10:48.332 "state": "enabled", 00:10:48.332 "thread": "nvmf_tgt_poll_group_000", 00:10:48.332 "listen_address": { 00:10:48.332 "trtype": "TCP", 00:10:48.332 "adrfam": "IPv4", 00:10:48.332 "traddr": "10.0.0.2", 00:10:48.332 "trsvcid": "4420" 00:10:48.332 }, 00:10:48.332 "peer_address": { 00:10:48.332 "trtype": "TCP", 00:10:48.332 "adrfam": "IPv4", 00:10:48.333 "traddr": "10.0.0.1", 00:10:48.333 "trsvcid": "45022" 00:10:48.333 }, 00:10:48.333 "auth": { 00:10:48.333 "state": "completed", 00:10:48.333 "digest": "sha384", 00:10:48.333 "dhgroup": "null" 00:10:48.333 } 00:10:48.333 } 00:10:48.333 ]' 00:10:48.333 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.591 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.591 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.591 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:48.591 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.591 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.591 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.591 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.850 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:49.787 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:50.390 00:10:50.390 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:50.390 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.390 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.660 { 00:10:50.660 "cntlid": 55, 00:10:50.660 "qid": 0, 00:10:50.660 "state": "enabled", 00:10:50.660 "thread": "nvmf_tgt_poll_group_000", 00:10:50.660 "listen_address": { 00:10:50.660 "trtype": "TCP", 00:10:50.660 "adrfam": "IPv4", 00:10:50.660 "traddr": "10.0.0.2", 00:10:50.660 "trsvcid": "4420" 00:10:50.660 }, 00:10:50.660 "peer_address": { 00:10:50.660 "trtype": "TCP", 00:10:50.660 "adrfam": "IPv4", 00:10:50.660 "traddr": "10.0.0.1", 00:10:50.660 "trsvcid": "54464" 00:10:50.660 }, 00:10:50.660 "auth": { 00:10:50.660 "state": "completed", 00:10:50.660 "digest": "sha384", 00:10:50.660 "dhgroup": "null" 00:10:50.660 } 00:10:50.660 } 00:10:50.660 ]' 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.660 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.919 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.857 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.445 00:10:52.445 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:52.445 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:52.445 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.704 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.704 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.704 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.704 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.704 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.704 10:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:52.704 { 00:10:52.705 "cntlid": 57, 00:10:52.705 "qid": 0, 00:10:52.705 "state": "enabled", 00:10:52.705 "thread": "nvmf_tgt_poll_group_000", 00:10:52.705 "listen_address": { 00:10:52.705 "trtype": "TCP", 00:10:52.705 "adrfam": "IPv4", 00:10:52.705 "traddr": "10.0.0.2", 00:10:52.705 "trsvcid": "4420" 00:10:52.705 }, 00:10:52.705 "peer_address": { 00:10:52.705 "trtype": "TCP", 00:10:52.705 "adrfam": "IPv4", 00:10:52.705 "traddr": "10.0.0.1", 00:10:52.705 "trsvcid": "54482" 00:10:52.705 }, 00:10:52.705 "auth": { 00:10:52.705 "state": "completed", 00:10:52.705 "digest": "sha384", 00:10:52.705 "dhgroup": "ffdhe2048" 00:10:52.705 } 00:10:52.705 } 00:10:52.705 ]' 00:10:52.705 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.705 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:52.705 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.705 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:52.705 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.705 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.705 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.705 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.964 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:10:53.532 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.794 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:53.794 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.794 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.794 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.794 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:53.794 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:53.794 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.053 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.366 00:10:54.366 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.366 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.366 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.624 { 00:10:54.624 "cntlid": 59, 00:10:54.624 "qid": 0, 00:10:54.624 "state": "enabled", 00:10:54.624 "thread": "nvmf_tgt_poll_group_000", 00:10:54.624 "listen_address": { 00:10:54.624 "trtype": "TCP", 00:10:54.624 "adrfam": "IPv4", 00:10:54.624 "traddr": "10.0.0.2", 00:10:54.624 "trsvcid": "4420" 
00:10:54.624 }, 00:10:54.624 "peer_address": { 00:10:54.624 "trtype": "TCP", 00:10:54.624 "adrfam": "IPv4", 00:10:54.624 "traddr": "10.0.0.1", 00:10:54.624 "trsvcid": "54510" 00:10:54.624 }, 00:10:54.624 "auth": { 00:10:54.624 "state": "completed", 00:10:54.624 "digest": "sha384", 00:10:54.624 "dhgroup": "ffdhe2048" 00:10:54.624 } 00:10:54.624 } 00:10:54.624 ]' 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:54.624 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.882 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.882 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.882 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.138 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:10:56.070 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.070 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:56.070 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.070 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.070 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.070 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.070 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:56.070 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.635 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.893 00:10:56.893 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.893 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.893 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.150 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.150 { 00:10:57.150 "cntlid": 61, 00:10:57.150 "qid": 0, 00:10:57.150 "state": "enabled", 00:10:57.150 "thread": "nvmf_tgt_poll_group_000", 00:10:57.150 "listen_address": { 00:10:57.150 "trtype": "TCP", 00:10:57.150 "adrfam": "IPv4", 00:10:57.150 "traddr": "10.0.0.2", 00:10:57.150 "trsvcid": "4420" 00:10:57.150 }, 00:10:57.150 "peer_address": { 00:10:57.150 "trtype": "TCP", 00:10:57.150 "adrfam": "IPv4", 00:10:57.150 "traddr": "10.0.0.1", 00:10:57.150 "trsvcid": "54528" 00:10:57.150 }, 00:10:57.150 "auth": { 00:10:57.151 "state": "completed", 00:10:57.151 "digest": "sha384", 00:10:57.151 "dhgroup": "ffdhe2048" 00:10:57.151 } 00:10:57.151 } 00:10:57.151 ]' 00:10:57.151 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.151 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.151 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.408 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:57.408 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.408 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.408 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.408 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.666 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:10:58.269 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.269 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:10:58.269 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.269 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.269 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.269 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.269 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:58.269 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.528 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.787 00:10:58.787 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.787 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.787 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.092 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.092 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.092 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.092 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.092 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.092 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:59.092 { 00:10:59.092 "cntlid": 63, 00:10:59.092 "qid": 0, 00:10:59.092 "state": "enabled", 00:10:59.092 "thread": "nvmf_tgt_poll_group_000", 00:10:59.092 "listen_address": { 00:10:59.092 "trtype": "TCP", 00:10:59.092 "adrfam": "IPv4", 00:10:59.092 "traddr": "10.0.0.2", 00:10:59.092 "trsvcid": "4420" 00:10:59.092 }, 00:10:59.092 "peer_address": { 00:10:59.092 "trtype": "TCP", 00:10:59.092 "adrfam": "IPv4", 00:10:59.092 "traddr": "10.0.0.1", 00:10:59.092 "trsvcid": "51564" 00:10:59.092 }, 00:10:59.092 "auth": { 00:10:59.092 "state": "completed", 00:10:59.092 "digest": "sha384", 00:10:59.092 "dhgroup": "ffdhe2048" 00:10:59.092 } 00:10:59.092 } 00:10:59.092 ]' 00:10:59.092 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.092 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.092 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.358 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:59.359 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:10:59.359 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.359 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.359 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.617 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:11:00.185 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.185 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:00.185 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.185 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.185 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.185 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:00.185 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.185 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:00.185 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:00.444 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:00.444 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.444 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:00.444 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:00.444 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:00.444 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.444 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.444 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.444 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.445 10:49:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.445 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.445 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.012 00:11:01.012 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.012 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:01.012 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.012 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.012 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.012 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.012 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.270 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.270 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.270 { 00:11:01.270 "cntlid": 65, 00:11:01.270 "qid": 0, 00:11:01.270 "state": "enabled", 00:11:01.270 "thread": "nvmf_tgt_poll_group_000", 00:11:01.270 "listen_address": { 00:11:01.270 "trtype": "TCP", 00:11:01.270 "adrfam": "IPv4", 00:11:01.270 "traddr": "10.0.0.2", 00:11:01.270 "trsvcid": "4420" 00:11:01.270 }, 00:11:01.270 "peer_address": { 00:11:01.270 "trtype": "TCP", 00:11:01.270 "adrfam": "IPv4", 00:11:01.270 "traddr": "10.0.0.1", 00:11:01.270 "trsvcid": "51588" 00:11:01.270 }, 00:11:01.270 "auth": { 00:11:01.270 "state": "completed", 00:11:01.270 "digest": "sha384", 00:11:01.270 "dhgroup": "ffdhe3072" 00:11:01.270 } 00:11:01.270 } 00:11:01.270 ]' 00:11:01.270 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.270 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.270 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.270 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:01.270 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.270 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.270 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.270 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.529 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:11:02.462 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.462 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:02.462 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.462 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.462 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.462 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.462 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:02.462 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:11:02.462 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.028 00:11:03.028 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.028 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.028 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.028 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.028 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.028 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.028 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.028 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.028 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.028 { 00:11:03.028 "cntlid": 67, 00:11:03.028 "qid": 0, 00:11:03.028 "state": "enabled", 00:11:03.028 "thread": "nvmf_tgt_poll_group_000", 00:11:03.028 "listen_address": { 00:11:03.028 "trtype": "TCP", 00:11:03.028 "adrfam": "IPv4", 00:11:03.028 "traddr": "10.0.0.2", 00:11:03.028 "trsvcid": "4420" 00:11:03.028 }, 00:11:03.028 "peer_address": { 00:11:03.028 "trtype": "TCP", 00:11:03.028 "adrfam": "IPv4", 00:11:03.028 "traddr": "10.0.0.1", 00:11:03.028 "trsvcid": "51610" 00:11:03.028 }, 00:11:03.028 "auth": { 00:11:03.028 "state": "completed", 00:11:03.028 "digest": "sha384", 00:11:03.028 "dhgroup": "ffdhe3072" 00:11:03.028 } 00:11:03.028 } 00:11:03.028 ]' 00:11:03.028 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.285 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.285 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.285 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:03.285 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.285 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.285 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.285 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.543 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid 
bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:11:04.476 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.476 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:04.476 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.476 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.476 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.476 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:04.476 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:04.734 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:04.734 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.734 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:04.734 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:04.734 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:04.734 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.735 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.735 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.735 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.735 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.735 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.735 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:11:04.993 00:11:04.993 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.993 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.993 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.250 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.250 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.250 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.250 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.250 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.250 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.250 { 00:11:05.250 "cntlid": 69, 00:11:05.250 "qid": 0, 00:11:05.250 "state": "enabled", 00:11:05.250 "thread": "nvmf_tgt_poll_group_000", 00:11:05.250 "listen_address": { 00:11:05.250 "trtype": "TCP", 00:11:05.250 "adrfam": "IPv4", 00:11:05.250 "traddr": "10.0.0.2", 00:11:05.250 "trsvcid": "4420" 00:11:05.250 }, 00:11:05.250 "peer_address": { 00:11:05.250 "trtype": "TCP", 00:11:05.250 "adrfam": "IPv4", 00:11:05.250 "traddr": "10.0.0.1", 00:11:05.250 "trsvcid": "51644" 00:11:05.250 }, 00:11:05.250 "auth": { 00:11:05.250 "state": "completed", 00:11:05.250 "digest": "sha384", 00:11:05.250 "dhgroup": "ffdhe3072" 00:11:05.250 } 00:11:05.250 } 00:11:05.250 ]' 00:11:05.250 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.507 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.507 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.507 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:05.507 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.507 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.507 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.507 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.764 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:06.697 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:07.294 00:11:07.294 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.294 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.294 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.553 { 00:11:07.553 "cntlid": 71, 00:11:07.553 "qid": 0, 00:11:07.553 "state": "enabled", 00:11:07.553 "thread": "nvmf_tgt_poll_group_000", 00:11:07.553 "listen_address": { 00:11:07.553 "trtype": "TCP", 00:11:07.553 "adrfam": "IPv4", 00:11:07.553 "traddr": "10.0.0.2", 00:11:07.553 "trsvcid": "4420" 00:11:07.553 }, 00:11:07.553 "peer_address": { 00:11:07.553 "trtype": "TCP", 00:11:07.553 "adrfam": "IPv4", 00:11:07.553 "traddr": "10.0.0.1", 00:11:07.553 "trsvcid": "51672" 00:11:07.553 }, 00:11:07.553 "auth": { 00:11:07.553 "state": "completed", 00:11:07.553 "digest": "sha384", 00:11:07.553 "dhgroup": "ffdhe3072" 00:11:07.553 } 00:11:07.553 } 00:11:07.553 ]' 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.553 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:11:08.379 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.638 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:08.638 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.638 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.638 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:08.638 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:08.638 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.638 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:08.638 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.898 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.157 00:11:09.157 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.157 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.157 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.416 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.416 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.416 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.416 10:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.416 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.416 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.416 { 00:11:09.416 "cntlid": 73, 00:11:09.416 "qid": 0, 00:11:09.416 "state": "enabled", 00:11:09.416 "thread": "nvmf_tgt_poll_group_000", 00:11:09.416 "listen_address": { 00:11:09.416 "trtype": "TCP", 00:11:09.416 "adrfam": "IPv4", 00:11:09.416 "traddr": "10.0.0.2", 00:11:09.416 "trsvcid": "4420" 00:11:09.416 }, 00:11:09.416 "peer_address": { 00:11:09.416 "trtype": "TCP", 00:11:09.416 "adrfam": "IPv4", 00:11:09.416 "traddr": "10.0.0.1", 00:11:09.416 "trsvcid": "45308" 00:11:09.416 }, 00:11:09.416 "auth": { 00:11:09.416 "state": "completed", 00:11:09.416 "digest": "sha384", 00:11:09.416 "dhgroup": "ffdhe4096" 00:11:09.416 } 00:11:09.416 } 00:11:09.416 ]' 00:11:09.416 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.416 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.416 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.416 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:09.416 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.675 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.675 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.675 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.934 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:11:10.502 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.502 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:10.502 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.502 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.502 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.502 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.502 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:10.502 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.761 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.019 00:11:11.019 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.019 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.019 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.586 { 00:11:11.586 "cntlid": 75, 00:11:11.586 "qid": 0, 00:11:11.586 
"state": "enabled", 00:11:11.586 "thread": "nvmf_tgt_poll_group_000", 00:11:11.586 "listen_address": { 00:11:11.586 "trtype": "TCP", 00:11:11.586 "adrfam": "IPv4", 00:11:11.586 "traddr": "10.0.0.2", 00:11:11.586 "trsvcid": "4420" 00:11:11.586 }, 00:11:11.586 "peer_address": { 00:11:11.586 "trtype": "TCP", 00:11:11.586 "adrfam": "IPv4", 00:11:11.586 "traddr": "10.0.0.1", 00:11:11.586 "trsvcid": "45328" 00:11:11.586 }, 00:11:11.586 "auth": { 00:11:11.586 "state": "completed", 00:11:11.586 "digest": "sha384", 00:11:11.586 "dhgroup": "ffdhe4096" 00:11:11.586 } 00:11:11.586 } 00:11:11.586 ]' 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.586 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.845 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:11:12.412 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.412 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:12.412 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.412 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.412 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.412 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.412 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:12.412 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.670 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.929 00:11:12.929 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.929 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.929 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.194 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.194 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.194 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.194 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.194 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.194 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.194 { 00:11:13.194 "cntlid": 77, 00:11:13.194 "qid": 0, 00:11:13.194 "state": "enabled", 00:11:13.194 "thread": "nvmf_tgt_poll_group_000", 00:11:13.194 "listen_address": { 00:11:13.194 "trtype": "TCP", 00:11:13.194 "adrfam": "IPv4", 00:11:13.194 "traddr": "10.0.0.2", 00:11:13.194 "trsvcid": "4420" 00:11:13.194 }, 00:11:13.194 "peer_address": { 00:11:13.194 "trtype": "TCP", 00:11:13.194 "adrfam": "IPv4", 00:11:13.194 "traddr": "10.0.0.1", 00:11:13.194 "trsvcid": "45336" 00:11:13.194 }, 00:11:13.194 
"auth": { 00:11:13.194 "state": "completed", 00:11:13.194 "digest": "sha384", 00:11:13.194 "dhgroup": "ffdhe4096" 00:11:13.194 } 00:11:13.194 } 00:11:13.194 ]' 00:11:13.194 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.472 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.472 10:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.472 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:13.472 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.472 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.472 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.472 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.731 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:14.668 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:14.927 00:11:15.186 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.186 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.186 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.444 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.444 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.444 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.444 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.444 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.444 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.444 { 00:11:15.444 "cntlid": 79, 00:11:15.444 "qid": 0, 00:11:15.444 "state": "enabled", 00:11:15.444 "thread": "nvmf_tgt_poll_group_000", 00:11:15.444 "listen_address": { 00:11:15.444 "trtype": "TCP", 00:11:15.444 "adrfam": "IPv4", 00:11:15.444 "traddr": "10.0.0.2", 00:11:15.444 "trsvcid": "4420" 00:11:15.444 }, 00:11:15.444 "peer_address": { 00:11:15.445 "trtype": "TCP", 00:11:15.445 "adrfam": "IPv4", 00:11:15.445 "traddr": "10.0.0.1", 00:11:15.445 "trsvcid": "45360" 00:11:15.445 }, 00:11:15.445 "auth": { 00:11:15.445 "state": "completed", 00:11:15.445 "digest": "sha384", 00:11:15.445 "dhgroup": "ffdhe4096" 00:11:15.445 } 00:11:15.445 } 00:11:15.445 ]' 00:11:15.445 10:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.445 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.445 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
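The qpair dumps and jq lines running through this part of the trace are the actual pass/fail check of each round: the negotiated digest, DH group and authentication state are read back from the target and compared with what was configured. A minimal stand-alone version of that check, reusing the RPC and jq filters from the trace and assuming the target app listens on its default RPC socket:

# Hedged sketch; values match the sha384/ffdhe4096 rounds shown above.
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# The trace inspects element 0 only; auth.state must read "completed" once DH-CHAP finished.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]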
00:11:15.445 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:15.445 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.445 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.445 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.445 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.703 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:11:16.271 10:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.271 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:16.271 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.271 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.529 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.529 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:16.529 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.529 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:16.529 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.788 10:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.788 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.046 00:11:17.046 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.046 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.046 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.306 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.306 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.306 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.306 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.306 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.306 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.306 { 00:11:17.306 "cntlid": 81, 00:11:17.306 "qid": 0, 00:11:17.306 "state": "enabled", 00:11:17.306 "thread": "nvmf_tgt_poll_group_000", 00:11:17.306 "listen_address": { 00:11:17.306 "trtype": "TCP", 00:11:17.306 "adrfam": "IPv4", 00:11:17.306 "traddr": "10.0.0.2", 00:11:17.306 "trsvcid": "4420" 00:11:17.306 }, 00:11:17.306 "peer_address": { 00:11:17.306 "trtype": "TCP", 00:11:17.306 "adrfam": "IPv4", 00:11:17.306 "traddr": "10.0.0.1", 00:11:17.306 "trsvcid": "45378" 00:11:17.306 }, 00:11:17.306 "auth": { 00:11:17.306 "state": "completed", 00:11:17.306 "digest": "sha384", 00:11:17.306 "dhgroup": "ffdhe6144" 00:11:17.306 } 00:11:17.306 } 00:11:17.306 ]' 00:11:17.306 10:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.306 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.306 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.565 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:17.565 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.565 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:17.565 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.565 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.825 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:11:18.393 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.393 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:18.393 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.393 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.393 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.393 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.393 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:18.393 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.960 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.961 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.249 00:11:19.249 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.249 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.249 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.507 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.507 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.507 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.507 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.507 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.507 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.507 { 00:11:19.507 "cntlid": 83, 00:11:19.507 "qid": 0, 00:11:19.507 "state": "enabled", 00:11:19.507 "thread": "nvmf_tgt_poll_group_000", 00:11:19.507 "listen_address": { 00:11:19.507 "trtype": "TCP", 00:11:19.507 "adrfam": "IPv4", 00:11:19.507 "traddr": "10.0.0.2", 00:11:19.507 "trsvcid": "4420" 00:11:19.507 }, 00:11:19.507 "peer_address": { 00:11:19.507 "trtype": "TCP", 00:11:19.507 "adrfam": "IPv4", 00:11:19.507 "traddr": "10.0.0.1", 00:11:19.507 "trsvcid": "45494" 00:11:19.507 }, 00:11:19.507 "auth": { 00:11:19.507 "state": "completed", 00:11:19.507 "digest": "sha384", 00:11:19.507 "dhgroup": "ffdhe6144" 00:11:19.507 } 00:11:19.507 } 00:11:19.507 ]' 00:11:19.507 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.508 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.508 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.508 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:19.508 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.508 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.508 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.508 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.766 10:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:11:20.702 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.702 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:20.702 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.702 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.702 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.702 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.702 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:20.702 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.962 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.220 00:11:21.479 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.479 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.479 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.750 { 00:11:21.750 "cntlid": 85, 00:11:21.750 "qid": 0, 00:11:21.750 "state": "enabled", 00:11:21.750 "thread": "nvmf_tgt_poll_group_000", 00:11:21.750 "listen_address": { 00:11:21.750 "trtype": "TCP", 00:11:21.750 "adrfam": "IPv4", 00:11:21.750 "traddr": "10.0.0.2", 00:11:21.750 "trsvcid": "4420" 00:11:21.750 }, 00:11:21.750 "peer_address": { 00:11:21.750 "trtype": "TCP", 00:11:21.750 "adrfam": "IPv4", 00:11:21.750 "traddr": "10.0.0.1", 00:11:21.750 "trsvcid": "45522" 00:11:21.750 }, 00:11:21.750 "auth": { 00:11:21.750 "state": "completed", 00:11:21.750 "digest": "sha384", 00:11:21.750 "dhgroup": "ffdhe6144" 00:11:21.750 } 00:11:21.750 } 00:11:21.750 ]' 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:21.750 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.023 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.023 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.023 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.281 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:11:22.849 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.849 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:22.849 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.849 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.849 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.849 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.849 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:22.849 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:23.108 10:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:23.676 00:11:23.676 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.676 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:11:23.676 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.935 { 00:11:23.935 "cntlid": 87, 00:11:23.935 "qid": 0, 00:11:23.935 "state": "enabled", 00:11:23.935 "thread": "nvmf_tgt_poll_group_000", 00:11:23.935 "listen_address": { 00:11:23.935 "trtype": "TCP", 00:11:23.935 "adrfam": "IPv4", 00:11:23.935 "traddr": "10.0.0.2", 00:11:23.935 "trsvcid": "4420" 00:11:23.935 }, 00:11:23.935 "peer_address": { 00:11:23.935 "trtype": "TCP", 00:11:23.935 "adrfam": "IPv4", 00:11:23.935 "traddr": "10.0.0.1", 00:11:23.935 "trsvcid": "45552" 00:11:23.935 }, 00:11:23.935 "auth": { 00:11:23.935 "state": "completed", 00:11:23.935 "digest": "sha384", 00:11:23.935 "dhgroup": "ffdhe6144" 00:11:23.935 } 00:11:23.935 } 00:11:23.935 ]' 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.935 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.503 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:11:25.071 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.071 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:25.071 10:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.071 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.071 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.071 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:25.071 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.071 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:25.071 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.330 10:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.898 00:11:25.898 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.898 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.898 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
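Interleaved with the host-side RPCs, every round also drives the kernel initiator through nvme-cli, as in the connect/disconnect pair with the DHHC-1:03 secret a few lines above. A condensed, hedged version of that leg; $key and $ctrl_key stand in for the DHHC-1 blobs printed verbatim in the log, and rounds without a controller key (key3) simply omit --dhchap-ctrl-secret:

# Kernel-initiator leg of a round, with the secrets taken from the log's DHHC-1 strings.
subnqn=nqn.2024-03.io.spdk:cnode0
uuid=bb4b8bd3-cfb4-4368-bf29-91254747069c

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"   # bidirectional DH-CHAP

nvme disconnect -n "$subnqn"   # the trace expects "disconnected 1 controller(s)"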
00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.179 { 00:11:26.179 "cntlid": 89, 00:11:26.179 "qid": 0, 00:11:26.179 "state": "enabled", 00:11:26.179 "thread": "nvmf_tgt_poll_group_000", 00:11:26.179 "listen_address": { 00:11:26.179 "trtype": "TCP", 00:11:26.179 "adrfam": "IPv4", 00:11:26.179 "traddr": "10.0.0.2", 00:11:26.179 "trsvcid": "4420" 00:11:26.179 }, 00:11:26.179 "peer_address": { 00:11:26.179 "trtype": "TCP", 00:11:26.179 "adrfam": "IPv4", 00:11:26.179 "traddr": "10.0.0.1", 00:11:26.179 "trsvcid": "45586" 00:11:26.179 }, 00:11:26.179 "auth": { 00:11:26.179 "state": "completed", 00:11:26.179 "digest": "sha384", 00:11:26.179 "dhgroup": "ffdhe8192" 00:11:26.179 } 00:11:26.179 } 00:11:26.179 ]' 00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:26.179 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.453 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.453 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.453 10:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.711 10:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:11:27.279 10:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.279 10:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:27.279 10:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.279 10:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.279 10:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.279 10:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.279 10:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:27.279 10:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.540 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.477 00:11:28.477 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:28.477 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:28.477 10:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.477 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.477 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.477 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.477 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.477 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.477 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.477 { 00:11:28.477 "cntlid": 91, 00:11:28.477 "qid": 0, 00:11:28.477 "state": "enabled", 00:11:28.477 "thread": "nvmf_tgt_poll_group_000", 00:11:28.477 "listen_address": { 00:11:28.477 "trtype": "TCP", 00:11:28.477 "adrfam": "IPv4", 00:11:28.477 "traddr": "10.0.0.2", 00:11:28.477 "trsvcid": "4420" 00:11:28.477 }, 00:11:28.477 "peer_address": { 00:11:28.477 "trtype": "TCP", 00:11:28.477 "adrfam": "IPv4", 00:11:28.477 "traddr": "10.0.0.1", 00:11:28.477 "trsvcid": "45608" 00:11:28.477 }, 00:11:28.477 "auth": { 00:11:28.477 "state": "completed", 00:11:28.477 "digest": "sha384", 00:11:28.477 "dhgroup": "ffdhe8192" 00:11:28.477 } 00:11:28.477 } 00:11:28.477 ]' 00:11:28.477 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.736 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.736 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.736 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:28.736 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.736 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.736 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.736 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.995 10:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:11:29.563 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.822 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:29.822 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.822 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.822 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.822 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.822 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:29.822 10:49:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:30.081 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.082 10:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.650 00:11:30.650 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.650 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.650 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:30.908 { 00:11:30.908 "cntlid": 93, 00:11:30.908 "qid": 0, 00:11:30.908 "state": "enabled", 00:11:30.908 "thread": "nvmf_tgt_poll_group_000", 00:11:30.908 
"listen_address": { 00:11:30.908 "trtype": "TCP", 00:11:30.908 "adrfam": "IPv4", 00:11:30.908 "traddr": "10.0.0.2", 00:11:30.908 "trsvcid": "4420" 00:11:30.908 }, 00:11:30.908 "peer_address": { 00:11:30.908 "trtype": "TCP", 00:11:30.908 "adrfam": "IPv4", 00:11:30.908 "traddr": "10.0.0.1", 00:11:30.908 "trsvcid": "58836" 00:11:30.908 }, 00:11:30.908 "auth": { 00:11:30.908 "state": "completed", 00:11:30.908 "digest": "sha384", 00:11:30.908 "dhgroup": "ffdhe8192" 00:11:30.908 } 00:11:30.908 } 00:11:30.908 ]' 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.908 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.909 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.909 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.167 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:11:32.119 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.120 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.378 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.378 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:32.378 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:32.945 00:11:32.945 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.945 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.945 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.204 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.204 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.204 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.204 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.204 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.204 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.204 { 00:11:33.204 "cntlid": 95, 00:11:33.204 "qid": 0, 00:11:33.204 "state": "enabled", 00:11:33.204 "thread": "nvmf_tgt_poll_group_000", 00:11:33.204 "listen_address": { 00:11:33.204 "trtype": "TCP", 00:11:33.204 "adrfam": "IPv4", 00:11:33.204 "traddr": "10.0.0.2", 00:11:33.204 "trsvcid": "4420" 00:11:33.204 }, 00:11:33.204 "peer_address": { 00:11:33.204 "trtype": "TCP", 00:11:33.204 "adrfam": "IPv4", 00:11:33.204 "traddr": "10.0.0.1", 00:11:33.204 "trsvcid": "58874" 00:11:33.204 }, 00:11:33.204 "auth": { 00:11:33.204 "state": "completed", 00:11:33.204 "digest": "sha384", 00:11:33.204 "dhgroup": "ffdhe8192" 00:11:33.204 } 00:11:33.204 } 00:11:33.204 ]' 
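The for-loop markers at auth.sh@91 to @94, which recur through the trace and appear again just below as the run moves from sha384/ffdhe8192 to sha512 with the null group, imply that this whole section is driven by three nested loops. The skeleton below is inferred from those trace lines only; the array contents list just the values visible in this part of the log, and the real script's arrays may well be longer.

# Loop skeleton implied by the auth.sh@91-@96 trace lines (illustrative, not the script itself).
digests=(sha384 sha512)                          # only the digests seen in this part of the log
dhgroups=(null ffdhe4096 ffdhe6144 ffdhe8192)    # likewise for the DH groups
# keys holds the registered DH-CHAP key names (key0..key3 in this run);
# connect_authenticate is the function condensed in the per-round sketch earlier in this section.

for digest in "${digests[@]}"; do                # auth.sh@91
  for dhgroup in "${dhgroups[@]}"; do            # auth.sh@92
    for keyid in "${!keys[@]}"; do               # auth.sh@93
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # @94
      connect_authenticate "$digest" "$dhgroup" "$keyid"                                     # @96
    done
  done
done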
00:11:33.204 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.204 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.204 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.463 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:33.463 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.463 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.463 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.463 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.722 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:11:34.288 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.288 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:34.288 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.288 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.289 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.289 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:34.289 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.289 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.289 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:34.289 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.547 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.806 00:11:34.806 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.806 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.806 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.064 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.064 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.064 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.064 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.064 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.064 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.064 { 00:11:35.064 "cntlid": 97, 00:11:35.064 "qid": 0, 00:11:35.064 "state": "enabled", 00:11:35.064 "thread": "nvmf_tgt_poll_group_000", 00:11:35.064 "listen_address": { 00:11:35.064 "trtype": "TCP", 00:11:35.064 "adrfam": "IPv4", 00:11:35.064 "traddr": "10.0.0.2", 00:11:35.064 "trsvcid": "4420" 00:11:35.064 }, 00:11:35.064 "peer_address": { 00:11:35.064 "trtype": "TCP", 00:11:35.064 "adrfam": "IPv4", 00:11:35.064 "traddr": "10.0.0.1", 00:11:35.064 "trsvcid": "58900" 00:11:35.064 }, 00:11:35.064 "auth": { 00:11:35.064 "state": "completed", 00:11:35.064 "digest": "sha512", 00:11:35.064 "dhgroup": "null" 00:11:35.064 } 00:11:35.064 } 00:11:35.064 ]' 00:11:35.064 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.323 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.323 10:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.323 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:35.323 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.323 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.323 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.323 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.581 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:11:36.516 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.516 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:36.516 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.516 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.516 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.516 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.516 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:36.516 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.516 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.775 00:11:37.034 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.034 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.034 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.034 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.034 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.034 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.034 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.293 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.293 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.293 { 00:11:37.293 "cntlid": 99, 00:11:37.293 "qid": 0, 00:11:37.293 "state": "enabled", 00:11:37.293 "thread": "nvmf_tgt_poll_group_000", 00:11:37.293 "listen_address": { 00:11:37.293 "trtype": "TCP", 00:11:37.293 "adrfam": "IPv4", 00:11:37.293 "traddr": "10.0.0.2", 00:11:37.293 "trsvcid": "4420" 00:11:37.293 }, 00:11:37.293 "peer_address": { 00:11:37.293 "trtype": "TCP", 00:11:37.293 "adrfam": "IPv4", 00:11:37.293 "traddr": "10.0.0.1", 00:11:37.293 "trsvcid": "58934" 00:11:37.293 }, 00:11:37.293 "auth": { 00:11:37.293 "state": "completed", 00:11:37.293 "digest": "sha512", 00:11:37.293 "dhgroup": "null" 00:11:37.293 } 00:11:37.293 } 00:11:37.293 ]' 00:11:37.293 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.293 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.293 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.293 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:37.293 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.293 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:37.293 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.293 10:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.552 10:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:11:38.119 10:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.119 10:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:38.119 10:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.119 10:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.119 10:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.119 10:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.119 10:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:38.119 10:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:38.378 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:38.378 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.378 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:38.378 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:38.378 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:38.378 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.379 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.379 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.379 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.379 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.379 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.379 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.946 00:11:38.946 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.946 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.946 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.204 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.204 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.204 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.204 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.204 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.205 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.205 { 00:11:39.205 "cntlid": 101, 00:11:39.205 "qid": 0, 00:11:39.205 "state": "enabled", 00:11:39.205 "thread": "nvmf_tgt_poll_group_000", 00:11:39.205 "listen_address": { 00:11:39.205 "trtype": "TCP", 00:11:39.205 "adrfam": "IPv4", 00:11:39.205 "traddr": "10.0.0.2", 00:11:39.205 "trsvcid": "4420" 00:11:39.205 }, 00:11:39.205 "peer_address": { 00:11:39.205 "trtype": "TCP", 00:11:39.205 "adrfam": "IPv4", 00:11:39.205 "traddr": "10.0.0.1", 00:11:39.205 "trsvcid": "33238" 00:11:39.205 }, 00:11:39.205 "auth": { 00:11:39.205 "state": "completed", 00:11:39.205 "digest": "sha512", 00:11:39.205 "dhgroup": "null" 00:11:39.205 } 00:11:39.205 } 00:11:39.205 ]' 00:11:39.205 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.205 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:39.205 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.205 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:39.205 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.205 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.205 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.205 10:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.463 10:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:11:40.399 10:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.399 10:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:40.399 10:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 10:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 10:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 10:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.400 10:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:40.400 10:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.400 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:11:40.967 00:11:40.967 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.967 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.967 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.226 { 00:11:41.226 "cntlid": 103, 00:11:41.226 "qid": 0, 00:11:41.226 "state": "enabled", 00:11:41.226 "thread": "nvmf_tgt_poll_group_000", 00:11:41.226 "listen_address": { 00:11:41.226 "trtype": "TCP", 00:11:41.226 "adrfam": "IPv4", 00:11:41.226 "traddr": "10.0.0.2", 00:11:41.226 "trsvcid": "4420" 00:11:41.226 }, 00:11:41.226 "peer_address": { 00:11:41.226 "trtype": "TCP", 00:11:41.226 "adrfam": "IPv4", 00:11:41.226 "traddr": "10.0.0.1", 00:11:41.226 "trsvcid": "33272" 00:11:41.226 }, 00:11:41.226 "auth": { 00:11:41.226 "state": "completed", 00:11:41.226 "digest": "sha512", 00:11:41.226 "dhgroup": "null" 00:11:41.226 } 00:11:41.226 } 00:11:41.226 ]' 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.226 10:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.486 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:11:42.422 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.422 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:42.422 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.422 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.422 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.422 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.422 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.422 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:42.422 10:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:42.680 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:11:42.680 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.680 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:42.680 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:42.680 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:42.680 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.681 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.681 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.681 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.681 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.681 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.681 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.939 00:11:42.939 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.939 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.939 10:50:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.198 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.198 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.198 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.198 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.198 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.198 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.198 { 00:11:43.198 "cntlid": 105, 00:11:43.198 "qid": 0, 00:11:43.198 "state": "enabled", 00:11:43.198 "thread": "nvmf_tgt_poll_group_000", 00:11:43.198 "listen_address": { 00:11:43.198 "trtype": "TCP", 00:11:43.198 "adrfam": "IPv4", 00:11:43.198 "traddr": "10.0.0.2", 00:11:43.198 "trsvcid": "4420" 00:11:43.198 }, 00:11:43.198 "peer_address": { 00:11:43.198 "trtype": "TCP", 00:11:43.198 "adrfam": "IPv4", 00:11:43.198 "traddr": "10.0.0.1", 00:11:43.198 "trsvcid": "33302" 00:11:43.198 }, 00:11:43.198 "auth": { 00:11:43.198 "state": "completed", 00:11:43.198 "digest": "sha512", 00:11:43.198 "dhgroup": "ffdhe2048" 00:11:43.198 } 00:11:43.198 } 00:11:43.198 ]' 00:11:43.198 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.198 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:43.198 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.457 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:43.457 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.457 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.457 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.457 10:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.716 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:11:44.283 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.283 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:44.283 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.283 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.283 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.283 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.283 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:44.283 10:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.542 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.110 00:11:45.111 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.111 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.111 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.111 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.111 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.111 10:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.111 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.370 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.370 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.370 { 00:11:45.370 "cntlid": 107, 00:11:45.370 "qid": 0, 00:11:45.370 "state": "enabled", 00:11:45.370 "thread": "nvmf_tgt_poll_group_000", 00:11:45.370 "listen_address": { 00:11:45.370 "trtype": "TCP", 00:11:45.370 "adrfam": "IPv4", 00:11:45.370 "traddr": "10.0.0.2", 00:11:45.370 "trsvcid": "4420" 00:11:45.370 }, 00:11:45.370 "peer_address": { 00:11:45.370 "trtype": "TCP", 00:11:45.370 "adrfam": "IPv4", 00:11:45.370 "traddr": "10.0.0.1", 00:11:45.370 "trsvcid": "33326" 00:11:45.370 }, 00:11:45.370 "auth": { 00:11:45.370 "state": "completed", 00:11:45.370 "digest": "sha512", 00:11:45.370 "dhgroup": "ffdhe2048" 00:11:45.370 } 00:11:45.370 } 00:11:45.370 ]' 00:11:45.370 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.370 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.370 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.371 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:45.371 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.371 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.371 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.371 10:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.630 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:11:46.198 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.198 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:46.198 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.198 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.198 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.198 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.198 10:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:46.198 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.457 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.024 00:11:47.024 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.024 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.024 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:47.283 { 00:11:47.283 "cntlid": 109, 00:11:47.283 "qid": 0, 00:11:47.283 "state": "enabled", 00:11:47.283 "thread": "nvmf_tgt_poll_group_000", 00:11:47.283 "listen_address": { 00:11:47.283 "trtype": "TCP", 00:11:47.283 "adrfam": "IPv4", 00:11:47.283 "traddr": "10.0.0.2", 00:11:47.283 "trsvcid": "4420" 00:11:47.283 }, 00:11:47.283 "peer_address": { 00:11:47.283 "trtype": "TCP", 00:11:47.283 "adrfam": "IPv4", 00:11:47.283 "traddr": "10.0.0.1", 00:11:47.283 "trsvcid": "33342" 00:11:47.283 }, 00:11:47.283 "auth": { 00:11:47.283 "state": "completed", 00:11:47.283 "digest": "sha512", 00:11:47.283 "dhgroup": "ffdhe2048" 00:11:47.283 } 00:11:47.283 } 00:11:47.283 ]' 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.283 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.556 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:11:48.491 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.491 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:48.491 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.491 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.491 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.491 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.491 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:48.491 10:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:48.491 10:50:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.491 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.055 00:11:49.055 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.055 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.055 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.314 { 00:11:49.314 "cntlid": 111, 00:11:49.314 "qid": 0, 00:11:49.314 "state": "enabled", 00:11:49.314 "thread": "nvmf_tgt_poll_group_000", 00:11:49.314 "listen_address": { 00:11:49.314 "trtype": "TCP", 00:11:49.314 "adrfam": "IPv4", 00:11:49.314 "traddr": "10.0.0.2", 00:11:49.314 "trsvcid": "4420" 00:11:49.314 }, 00:11:49.314 "peer_address": { 00:11:49.314 "trtype": "TCP", 00:11:49.314 "adrfam": "IPv4", 00:11:49.314 "traddr": "10.0.0.1", 00:11:49.314 "trsvcid": 
"59396" 00:11:49.314 }, 00:11:49.314 "auth": { 00:11:49.314 "state": "completed", 00:11:49.314 "digest": "sha512", 00:11:49.314 "dhgroup": "ffdhe2048" 00:11:49.314 } 00:11:49.314 } 00:11:49.314 ]' 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.314 10:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.573 10:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:11:50.508 10:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.508 10:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:50.508 10:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.508 10:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.508 10:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.508 10:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.508 10:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.508 10:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:50.508 10:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.508 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.766 00:11:50.766 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.766 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.766 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.103 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.103 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.103 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.103 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.103 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.103 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.103 { 00:11:51.103 "cntlid": 113, 00:11:51.103 "qid": 0, 00:11:51.103 "state": "enabled", 00:11:51.103 "thread": "nvmf_tgt_poll_group_000", 00:11:51.103 "listen_address": { 00:11:51.103 "trtype": "TCP", 00:11:51.103 "adrfam": "IPv4", 00:11:51.103 "traddr": "10.0.0.2", 00:11:51.103 "trsvcid": "4420" 00:11:51.103 }, 00:11:51.103 "peer_address": { 00:11:51.103 "trtype": "TCP", 00:11:51.103 "adrfam": "IPv4", 00:11:51.103 "traddr": "10.0.0.1", 00:11:51.103 "trsvcid": "59418" 00:11:51.103 }, 00:11:51.103 "auth": { 00:11:51.103 "state": "completed", 00:11:51.103 "digest": "sha512", 00:11:51.103 "dhgroup": "ffdhe3072" 00:11:51.103 } 00:11:51.103 } 00:11:51.103 ]' 00:11:51.103 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.103 10:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:51.103 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.394 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:51.394 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.394 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.394 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.394 10:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.656 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:11:52.222 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.223 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:52.223 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.223 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.223 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.223 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.223 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:52.223 10:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.482 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.740 00:11:52.740 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.740 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.740 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.307 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.307 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.307 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.307 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.307 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.307 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.307 { 00:11:53.307 "cntlid": 115, 00:11:53.307 "qid": 0, 00:11:53.307 "state": "enabled", 00:11:53.307 "thread": "nvmf_tgt_poll_group_000", 00:11:53.307 "listen_address": { 00:11:53.307 "trtype": "TCP", 00:11:53.307 "adrfam": "IPv4", 00:11:53.307 "traddr": "10.0.0.2", 00:11:53.307 "trsvcid": "4420" 00:11:53.307 }, 00:11:53.307 "peer_address": { 00:11:53.307 "trtype": "TCP", 00:11:53.307 "adrfam": "IPv4", 00:11:53.307 "traddr": "10.0.0.1", 00:11:53.307 "trsvcid": "59438" 00:11:53.307 }, 00:11:53.307 "auth": { 00:11:53.307 "state": "completed", 00:11:53.307 "digest": "sha512", 00:11:53.307 "dhgroup": "ffdhe3072" 00:11:53.307 } 00:11:53.307 } 00:11:53.307 ]' 00:11:53.307 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.307 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.308 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.308 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:53.308 10:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.308 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.308 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.308 10:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.565 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:11:54.501 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.501 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:54.501 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.501 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.501 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.501 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.501 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:54.501 10:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.501 10:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.501 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.760 00:11:54.760 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.760 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.760 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.019 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.019 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.019 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.019 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.019 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.019 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.019 { 00:11:55.019 "cntlid": 117, 00:11:55.019 "qid": 0, 00:11:55.019 "state": "enabled", 00:11:55.019 "thread": "nvmf_tgt_poll_group_000", 00:11:55.019 "listen_address": { 00:11:55.019 "trtype": "TCP", 00:11:55.019 "adrfam": "IPv4", 00:11:55.019 "traddr": "10.0.0.2", 00:11:55.019 "trsvcid": "4420" 00:11:55.019 }, 00:11:55.019 "peer_address": { 00:11:55.019 "trtype": "TCP", 00:11:55.019 "adrfam": "IPv4", 00:11:55.019 "traddr": "10.0.0.1", 00:11:55.019 "trsvcid": "59476" 00:11:55.019 }, 00:11:55.019 "auth": { 00:11:55.019 "state": "completed", 00:11:55.019 "digest": "sha512", 00:11:55.019 "dhgroup": "ffdhe3072" 00:11:55.019 } 00:11:55.019 } 00:11:55.019 ]' 00:11:55.019 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.278 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.278 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.279 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:55.279 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.279 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.279 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.279 10:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.537 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:11:56.473 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.473 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:56.473 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.473 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.473 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.473 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.473 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:56.473 10:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:56.473 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:11:56.473 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.473 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:56.473 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:56.473 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:56.473 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.473 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:11:56.473 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.474 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.474 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.474 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.474 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.042 00:11:57.042 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.042 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.042 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.042 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.042 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.042 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.042 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.302 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.302 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.302 { 00:11:57.302 "cntlid": 119, 00:11:57.302 "qid": 0, 00:11:57.302 "state": "enabled", 00:11:57.302 "thread": "nvmf_tgt_poll_group_000", 00:11:57.302 "listen_address": { 00:11:57.302 "trtype": "TCP", 00:11:57.302 "adrfam": "IPv4", 00:11:57.302 "traddr": "10.0.0.2", 00:11:57.302 "trsvcid": "4420" 00:11:57.302 }, 00:11:57.302 "peer_address": { 00:11:57.302 "trtype": "TCP", 00:11:57.302 "adrfam": "IPv4", 00:11:57.302 "traddr": "10.0.0.1", 00:11:57.302 "trsvcid": "59506" 00:11:57.302 }, 00:11:57.302 "auth": { 00:11:57.302 "state": "completed", 00:11:57.302 "digest": "sha512", 00:11:57.302 "dhgroup": "ffdhe3072" 00:11:57.302 } 00:11:57.302 } 00:11:57.302 ]' 00:11:57.302 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.302 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.302 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.302 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:57.302 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.302 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.302 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.302 10:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.574 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret 
DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:11:58.153 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.153 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:11:58.153 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.153 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.153 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.153 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.153 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.153 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:58.153 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.412 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
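As a reading aid for the xtrace output above and below, here is a minimal sketch of the RPC sequence that each connect_authenticate iteration in this log runs. Every command and flag is taken from the trace itself; the key names key0/ckey0 are assumed to have been registered earlier in the run (not shown in this excerpt), and the default target RPC socket is assumed for the calls issued without -s, while host-side calls go to /var/tmp/host.sock as in the log.

    # Sketch of one connect_authenticate pass, using only commands visible
    # in the trace. Key names and sockets per the assumptions stated above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c

    # Pin the host bdev layer to one digest/dhgroup combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Allow the host on the target subsystem with a DH-HMAC-CHAP key pair.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller over TCP; this is where authentication happens.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
            -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the negotiated digest, dhgroup and auth state on the qpair.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'

    # Tear down, then repeat the check through the kernel initiator
    # (nvme connect ... --dhchap-secret <key>; secrets elided here) before
    # removing the host again with nvmf_subsystem_remove_host.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The log then cycles this sequence over every keyid for each dhgroup (ffdhe2048 through ffdhe6144 in this section), which is why the same commands repeat with only the key index and dhgroup changing.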
00:11:58.979 00:11:58.979 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.979 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.979 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.238 { 00:11:59.238 "cntlid": 121, 00:11:59.238 "qid": 0, 00:11:59.238 "state": "enabled", 00:11:59.238 "thread": "nvmf_tgt_poll_group_000", 00:11:59.238 "listen_address": { 00:11:59.238 "trtype": "TCP", 00:11:59.238 "adrfam": "IPv4", 00:11:59.238 "traddr": "10.0.0.2", 00:11:59.238 "trsvcid": "4420" 00:11:59.238 }, 00:11:59.238 "peer_address": { 00:11:59.238 "trtype": "TCP", 00:11:59.238 "adrfam": "IPv4", 00:11:59.238 "traddr": "10.0.0.1", 00:11:59.238 "trsvcid": "48562" 00:11:59.238 }, 00:11:59.238 "auth": { 00:11:59.238 "state": "completed", 00:11:59.238 "digest": "sha512", 00:11:59.238 "dhgroup": "ffdhe4096" 00:11:59.238 } 00:11:59.238 } 00:11:59.238 ]' 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.238 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.805 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:12:00.373 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.373 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.373 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:00.373 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.373 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.373 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.373 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.373 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:00.373 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.631 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.890 00:12:00.890 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.890 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.890 10:50:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.147 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.148 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.148 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.148 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.148 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.148 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.148 { 00:12:01.148 "cntlid": 123, 00:12:01.148 "qid": 0, 00:12:01.148 "state": "enabled", 00:12:01.148 "thread": "nvmf_tgt_poll_group_000", 00:12:01.148 "listen_address": { 00:12:01.148 "trtype": "TCP", 00:12:01.148 "adrfam": "IPv4", 00:12:01.148 "traddr": "10.0.0.2", 00:12:01.148 "trsvcid": "4420" 00:12:01.148 }, 00:12:01.148 "peer_address": { 00:12:01.148 "trtype": "TCP", 00:12:01.148 "adrfam": "IPv4", 00:12:01.148 "traddr": "10.0.0.1", 00:12:01.148 "trsvcid": "48590" 00:12:01.148 }, 00:12:01.148 "auth": { 00:12:01.148 "state": "completed", 00:12:01.148 "digest": "sha512", 00:12:01.148 "dhgroup": "ffdhe4096" 00:12:01.148 } 00:12:01.148 } 00:12:01.148 ]' 00:12:01.148 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.405 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.406 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.406 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:01.406 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.406 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.406 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.406 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.663 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:12:02.597 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.598 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:02.598 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:02.598 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.598 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.598 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.598 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:02.598 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.598 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.190 00:12:03.190 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.190 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.190 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.449 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.449 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.449 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.449 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.449 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.449 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.449 { 00:12:03.449 "cntlid": 125, 00:12:03.449 "qid": 0, 00:12:03.449 "state": "enabled", 00:12:03.449 "thread": "nvmf_tgt_poll_group_000", 00:12:03.449 "listen_address": { 00:12:03.449 "trtype": "TCP", 00:12:03.449 "adrfam": "IPv4", 00:12:03.449 "traddr": "10.0.0.2", 00:12:03.449 "trsvcid": "4420" 00:12:03.449 }, 00:12:03.449 "peer_address": { 00:12:03.449 "trtype": "TCP", 00:12:03.449 "adrfam": "IPv4", 00:12:03.449 "traddr": "10.0.0.1", 00:12:03.449 "trsvcid": "48622" 00:12:03.449 }, 00:12:03.449 "auth": { 00:12:03.449 "state": "completed", 00:12:03.449 "digest": "sha512", 00:12:03.449 "dhgroup": "ffdhe4096" 00:12:03.449 } 00:12:03.449 } 00:12:03.449 ]' 00:12:03.449 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.449 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.449 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.449 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.449 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.449 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.449 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.449 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.709 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:04.644 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.211 00:12:05.211 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.211 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.211 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.469 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.469 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.469 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.469 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.469 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.469 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.469 { 00:12:05.469 "cntlid": 127, 00:12:05.469 "qid": 0, 00:12:05.469 "state": "enabled", 00:12:05.469 "thread": 
"nvmf_tgt_poll_group_000", 00:12:05.469 "listen_address": { 00:12:05.469 "trtype": "TCP", 00:12:05.469 "adrfam": "IPv4", 00:12:05.469 "traddr": "10.0.0.2", 00:12:05.469 "trsvcid": "4420" 00:12:05.469 }, 00:12:05.469 "peer_address": { 00:12:05.469 "trtype": "TCP", 00:12:05.469 "adrfam": "IPv4", 00:12:05.469 "traddr": "10.0.0.1", 00:12:05.469 "trsvcid": "48646" 00:12:05.469 }, 00:12:05.469 "auth": { 00:12:05.469 "state": "completed", 00:12:05.469 "digest": "sha512", 00:12:05.469 "dhgroup": "ffdhe4096" 00:12:05.469 } 00:12:05.469 } 00:12:05.469 ]' 00:12:05.469 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.469 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.469 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.469 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:05.469 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.469 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.469 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.469 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.728 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe6144 0 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.663 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.664 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.230 00:12:07.230 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.230 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.230 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.489 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.489 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.489 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.489 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.489 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.489 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.489 { 00:12:07.489 "cntlid": 129, 00:12:07.489 "qid": 0, 00:12:07.489 "state": "enabled", 00:12:07.489 "thread": "nvmf_tgt_poll_group_000", 00:12:07.489 "listen_address": { 00:12:07.489 "trtype": "TCP", 00:12:07.489 "adrfam": "IPv4", 00:12:07.489 "traddr": "10.0.0.2", 00:12:07.489 "trsvcid": "4420" 00:12:07.489 }, 00:12:07.489 "peer_address": { 00:12:07.489 "trtype": "TCP", 00:12:07.489 "adrfam": "IPv4", 00:12:07.489 "traddr": "10.0.0.1", 00:12:07.489 "trsvcid": "48676" 00:12:07.489 }, 
00:12:07.489 "auth": { 00:12:07.489 "state": "completed", 00:12:07.489 "digest": "sha512", 00:12:07.489 "dhgroup": "ffdhe6144" 00:12:07.489 } 00:12:07.489 } 00:12:07.489 ]' 00:12:07.489 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.748 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.748 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.748 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:07.748 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.748 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.748 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.748 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.006 10:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:12:08.574 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.574 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:08.574 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.574 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.574 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.574 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.574 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:08.574 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:08.833 10:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.833 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.401 00:12:09.401 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.401 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.401 10:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.660 { 00:12:09.660 "cntlid": 131, 00:12:09.660 "qid": 0, 00:12:09.660 "state": "enabled", 00:12:09.660 "thread": "nvmf_tgt_poll_group_000", 00:12:09.660 "listen_address": { 00:12:09.660 "trtype": "TCP", 00:12:09.660 "adrfam": "IPv4", 00:12:09.660 "traddr": "10.0.0.2", 00:12:09.660 "trsvcid": "4420" 00:12:09.660 }, 00:12:09.660 "peer_address": { 00:12:09.660 "trtype": "TCP", 00:12:09.660 "adrfam": "IPv4", 00:12:09.660 "traddr": "10.0.0.1", 00:12:09.660 "trsvcid": "55166" 00:12:09.660 }, 00:12:09.660 "auth": { 00:12:09.660 "state": "completed", 00:12:09.660 "digest": "sha512", 00:12:09.660 "dhgroup": "ffdhe6144" 00:12:09.660 } 00:12:09.660 } 00:12:09.660 ]' 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.660 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.228 10:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:12:10.795 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.795 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:10.795 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.795 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.795 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.795 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.795 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:10.795 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.054 10:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.621 00:12:11.621 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.621 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.621 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.879 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.879 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.879 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.879 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.879 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.879 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.879 { 00:12:11.879 "cntlid": 133, 00:12:11.879 "qid": 0, 00:12:11.879 "state": "enabled", 00:12:11.879 "thread": "nvmf_tgt_poll_group_000", 00:12:11.879 "listen_address": { 00:12:11.879 "trtype": "TCP", 00:12:11.879 "adrfam": "IPv4", 00:12:11.880 "traddr": "10.0.0.2", 00:12:11.880 "trsvcid": "4420" 00:12:11.880 }, 00:12:11.880 "peer_address": { 00:12:11.880 "trtype": "TCP", 00:12:11.880 "adrfam": "IPv4", 00:12:11.880 "traddr": "10.0.0.1", 00:12:11.880 "trsvcid": "55198" 00:12:11.880 }, 00:12:11.880 "auth": { 00:12:11.880 "state": "completed", 00:12:11.880 "digest": "sha512", 00:12:11.880 "dhgroup": "ffdhe6144" 00:12:11.880 } 00:12:11.880 } 00:12:11.880 ]' 00:12:11.880 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.880 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.880 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.139 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.139 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.139 10:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.139 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.139 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.397 10:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:12:12.965 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.965 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:12.965 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.965 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.965 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.965 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.965 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:12.965 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.224 10:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:13.224 10:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:13.790 00:12:13.790 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.790 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.790 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.049 { 00:12:14.049 "cntlid": 135, 00:12:14.049 "qid": 0, 00:12:14.049 "state": "enabled", 00:12:14.049 "thread": "nvmf_tgt_poll_group_000", 00:12:14.049 "listen_address": { 00:12:14.049 "trtype": "TCP", 00:12:14.049 "adrfam": "IPv4", 00:12:14.049 "traddr": "10.0.0.2", 00:12:14.049 "trsvcid": "4420" 00:12:14.049 }, 00:12:14.049 "peer_address": { 00:12:14.049 "trtype": "TCP", 00:12:14.049 "adrfam": "IPv4", 00:12:14.049 "traddr": "10.0.0.1", 00:12:14.049 "trsvcid": "55228" 00:12:14.049 }, 00:12:14.049 "auth": { 00:12:14.049 "state": "completed", 00:12:14.049 "digest": "sha512", 00:12:14.049 "dhgroup": "ffdhe6144" 00:12:14.049 } 00:12:14.049 } 00:12:14.049 ]' 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:14.049 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.308 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.308 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.308 10:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.567 10:50:44 
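# --- Sketch (not part of the captured trace) of the connect_authenticate cycle the log
# --- above keeps repeating for each digest/dhgroup/key combination. NQNs, addresses,
# --- socket path and key names are copied from the trace; key1/ckey1 are assumed to be
# --- DH-HMAC-CHAP keys registered earlier in the script, and the target is assumed to
# --- answer on its default RPC socket while the host-side app listens on /var/tmp/host.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the initiator to one digest/DH group for this iteration.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Target side: grant the host NQN access with key1 (ckey1 enables bidirectional auth).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach, then confirm the qpair finished authentication on the target.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"

# Tear down before the next digest/dhgroup/key combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"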
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:12:15.134 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.134 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:15.134 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.134 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.134 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.134 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:15.134 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.134 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:15.134 10:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.393 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.394 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.962 00:12:15.962 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.962 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.962 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.220 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.220 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.220 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.220 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.479 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.479 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.479 { 00:12:16.479 "cntlid": 137, 00:12:16.479 "qid": 0, 00:12:16.479 "state": "enabled", 00:12:16.479 "thread": "nvmf_tgt_poll_group_000", 00:12:16.479 "listen_address": { 00:12:16.479 "trtype": "TCP", 00:12:16.479 "adrfam": "IPv4", 00:12:16.479 "traddr": "10.0.0.2", 00:12:16.479 "trsvcid": "4420" 00:12:16.479 }, 00:12:16.479 "peer_address": { 00:12:16.479 "trtype": "TCP", 00:12:16.479 "adrfam": "IPv4", 00:12:16.479 "traddr": "10.0.0.1", 00:12:16.479 "trsvcid": "55270" 00:12:16.479 }, 00:12:16.479 "auth": { 00:12:16.479 "state": "completed", 00:12:16.479 "digest": "sha512", 00:12:16.479 "dhgroup": "ffdhe8192" 00:12:16.479 } 00:12:16.479 } 00:12:16.479 ]' 00:12:16.479 10:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.479 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.479 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.479 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:16.479 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.479 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.479 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.479 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.738 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: 
--dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.693 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.630 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.630 { 00:12:18.630 "cntlid": 139, 00:12:18.630 "qid": 0, 00:12:18.630 "state": "enabled", 00:12:18.630 "thread": "nvmf_tgt_poll_group_000", 00:12:18.630 "listen_address": { 00:12:18.630 "trtype": "TCP", 00:12:18.630 "adrfam": "IPv4", 00:12:18.630 "traddr": "10.0.0.2", 00:12:18.630 "trsvcid": "4420" 00:12:18.630 }, 00:12:18.630 "peer_address": { 00:12:18.630 "trtype": "TCP", 00:12:18.630 "adrfam": "IPv4", 00:12:18.630 "traddr": "10.0.0.1", 00:12:18.630 "trsvcid": "55312" 00:12:18.630 }, 00:12:18.630 "auth": { 00:12:18.630 "state": "completed", 00:12:18.630 "digest": "sha512", 00:12:18.630 "dhgroup": "ffdhe8192" 00:12:18.630 } 00:12:18.630 } 00:12:18.630 ]' 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.630 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.889 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:18.889 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.889 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.889 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.889 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.147 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:01:OGU3MTNlZDVjMDQ3NTFlNjNmOWQ1ZjM2ZDIzNTY0OGShwSVJ: --dhchap-ctrl-secret DHHC-1:02:N2UwYWNmZTg4NGUwYzQ4NTgyMjFjYjY1M2NjZDU0MzA3OGY2ZmFhYjFkNjdmOGNls9s4JA==: 00:12:19.716 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.716 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:19.716 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.716 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.716 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.716 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.716 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:19.716 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:19.975 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:19.975 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.975 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:19.976 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:19.976 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:19.976 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.976 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.976 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.976 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.976 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.976 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.976 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.542 00:12:20.542 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.542 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.542 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.802 { 00:12:20.802 "cntlid": 141, 00:12:20.802 "qid": 0, 00:12:20.802 "state": "enabled", 00:12:20.802 "thread": "nvmf_tgt_poll_group_000", 00:12:20.802 "listen_address": { 00:12:20.802 "trtype": "TCP", 00:12:20.802 "adrfam": "IPv4", 00:12:20.802 "traddr": "10.0.0.2", 00:12:20.802 "trsvcid": "4420" 00:12:20.802 }, 00:12:20.802 "peer_address": { 00:12:20.802 "trtype": "TCP", 00:12:20.802 "adrfam": "IPv4", 00:12:20.802 "traddr": "10.0.0.1", 00:12:20.802 "trsvcid": "52414" 00:12:20.802 }, 00:12:20.802 "auth": { 00:12:20.802 "state": "completed", 00:12:20.802 "digest": "sha512", 00:12:20.802 "dhgroup": "ffdhe8192" 00:12:20.802 } 00:12:20.802 } 00:12:20.802 ]' 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:20.802 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.061 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.061 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.061 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.062 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:02:ZmZmNThlNDI1ZjcyM2E1MGEwYTdkYWFjMzk2NmZkNDU3NWEyOTQxMzhhZTcwOTE0jsZ/MQ==: --dhchap-ctrl-secret DHHC-1:01:ZmJiOGM0NDBjYzRkZTQ0MmVlMGYyZjVjZTAyY2ZlOGP2BXJQ: 00:12:21.997 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.997 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:21.997 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.997 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.997 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:21.998 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:22.933 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.933 { 00:12:22.933 "cntlid": 143, 00:12:22.933 "qid": 0, 00:12:22.933 "state": "enabled", 00:12:22.933 "thread": "nvmf_tgt_poll_group_000", 00:12:22.933 "listen_address": { 00:12:22.933 "trtype": "TCP", 00:12:22.933 "adrfam": "IPv4", 00:12:22.933 "traddr": "10.0.0.2", 00:12:22.933 "trsvcid": "4420" 00:12:22.933 }, 00:12:22.933 "peer_address": { 00:12:22.933 "trtype": "TCP", 00:12:22.933 "adrfam": "IPv4", 00:12:22.933 "traddr": "10.0.0.1", 00:12:22.933 "trsvcid": "52446" 00:12:22.933 }, 00:12:22.933 "auth": { 00:12:22.933 "state": "completed", 00:12:22.933 "digest": "sha512", 00:12:22.933 "dhgroup": "ffdhe8192" 00:12:22.933 } 00:12:22.933 } 00:12:22.933 ]' 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.933 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.191 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.191 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.191 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.191 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.191 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.450 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:12:24.031 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.031 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:24.031 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.031 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.031 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.031 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:24.031 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:24.031 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:24.031 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:24.031 10:50:53 
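# --- Sketch (not part of the captured trace): the IFS=, / printf %s pair shown above is
# --- how auth.sh joins its digest and dhgroup arrays into the comma-separated lists that
# --- bdev_nvme_set_options expects. An equivalent standalone form, using the same values
# --- and host RPC socket as the trace, run in a subshell so IFS is not changed globally:
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
(
    IFS=,
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "${digests[*]}" --dhchap-dhgroups "${dhgroups[*]}"
)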
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:24.031 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.289 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.857 00:12:24.857 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.857 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.857 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.117 { 00:12:25.117 "cntlid": 145, 00:12:25.117 "qid": 0, 00:12:25.117 "state": "enabled", 00:12:25.117 "thread": "nvmf_tgt_poll_group_000", 00:12:25.117 "listen_address": { 00:12:25.117 "trtype": "TCP", 00:12:25.117 "adrfam": "IPv4", 00:12:25.117 "traddr": "10.0.0.2", 00:12:25.117 "trsvcid": "4420" 00:12:25.117 }, 00:12:25.117 "peer_address": { 00:12:25.117 "trtype": "TCP", 00:12:25.117 "adrfam": "IPv4", 00:12:25.117 "traddr": "10.0.0.1", 00:12:25.117 "trsvcid": "52484" 00:12:25.117 }, 00:12:25.117 "auth": { 00:12:25.117 "state": "completed", 00:12:25.117 "digest": "sha512", 00:12:25.117 "dhgroup": "ffdhe8192" 00:12:25.117 } 00:12:25.117 } 00:12:25.117 ]' 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:25.117 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.375 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.375 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.375 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.633 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:00:YTg5ODYxOTlkYjU2YmNlNDliNzI3MzVjM2Y0ZTVjYjQ5NmFkZDYwNmNmNTcxYWZkqfnZcw==: --dhchap-ctrl-secret DHHC-1:03:MWJhM2FjNDRiOGQxZjUzYTQ4OTlkMzQyYzkyMWEwN2U2YzQ1M2ZkZWRkMTEzYTRiZmMwMjc3ZjNhNTg0ZTAwN+fY0rE=: 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.198 10:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:26.198 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:26.764 request: 00:12:26.764 { 00:12:26.764 "name": "nvme0", 00:12:26.764 "trtype": "tcp", 00:12:26.764 "traddr": "10.0.0.2", 00:12:26.764 "adrfam": "ipv4", 00:12:26.764 "trsvcid": "4420", 00:12:26.764 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:26.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c", 00:12:26.764 "prchk_reftag": false, 00:12:26.764 "prchk_guard": false, 00:12:26.764 "hdgst": false, 00:12:26.764 "ddgst": false, 00:12:26.764 "dhchap_key": "key2", 00:12:26.764 "method": "bdev_nvme_attach_controller", 00:12:26.764 "req_id": 1 00:12:26.764 } 00:12:26.764 Got JSON-RPC error response 00:12:26.764 response: 00:12:26.764 { 00:12:26.764 "code": -5, 00:12:26.764 "message": "Input/output error" 00:12:26.764 } 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 
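# --- Sketch (not part of the captured trace) of the negative case just exercised above:
# --- the target granted this host NQN only key1 (and no controller key), so attaching
# --- with key2 must fail. The NOT wrapper in the trace treats a non-zero exit status as
# --- the expected outcome (es=1), and the visible symptom is the JSON-RPC
# --- "Input/output error" (code -5) response printed above. Paths and NQNs are copied
# --- from the trace.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c" \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
    echo "unexpected: attach with a key the target never granted succeeded" >&2
    exit 1
fi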
00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.764 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:26.765 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:27.329 request: 00:12:27.329 { 00:12:27.329 "name": "nvme0", 00:12:27.329 "trtype": "tcp", 00:12:27.329 "traddr": "10.0.0.2", 00:12:27.330 "adrfam": "ipv4", 00:12:27.330 "trsvcid": "4420", 00:12:27.330 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:27.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c", 00:12:27.330 "prchk_reftag": false, 00:12:27.330 "prchk_guard": false, 00:12:27.330 "hdgst": false, 00:12:27.330 "ddgst": false, 00:12:27.330 "dhchap_key": "key1", 00:12:27.330 "dhchap_ctrlr_key": "ckey2", 00:12:27.330 "method": "bdev_nvme_attach_controller", 
00:12:27.330 "req_id": 1 00:12:27.330 } 00:12:27.330 Got JSON-RPC error response 00:12:27.330 response: 00:12:27.330 { 00:12:27.330 "code": -5, 00:12:27.330 "message": "Input/output error" 00:12:27.330 } 00:12:27.330 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:27.330 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:27.330 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:27.330 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:27.330 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:27.330 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.330 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key1 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.588 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.154 request: 00:12:28.155 { 00:12:28.155 "name": "nvme0", 00:12:28.155 "trtype": "tcp", 00:12:28.155 "traddr": "10.0.0.2", 00:12:28.155 "adrfam": "ipv4", 00:12:28.155 "trsvcid": "4420", 00:12:28.155 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:28.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c", 00:12:28.155 "prchk_reftag": false, 00:12:28.155 "prchk_guard": false, 00:12:28.155 "hdgst": false, 00:12:28.155 "ddgst": false, 00:12:28.155 "dhchap_key": "key1", 00:12:28.155 "dhchap_ctrlr_key": "ckey1", 00:12:28.155 "method": "bdev_nvme_attach_controller", 00:12:28.155 "req_id": 1 00:12:28.155 } 00:12:28.155 Got JSON-RPC error response 00:12:28.155 response: 00:12:28.155 { 00:12:28.155 "code": -5, 00:12:28.155 "message": "Input/output error" 00:12:28.155 } 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 68612 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68612 ']' 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68612 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68612 00:12:28.155 killing process with pid 68612 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68612' 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68612 00:12:28.155 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68612 00:12:28.412 10:50:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71679 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71679 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71679 ']' 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.413 10:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 71679 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71679 ']' 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
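The target has just been relaunched with --wait-for-rpc, and the script blocks in waitforlisten until the new nvmfpid (71679) answers on /var/tmp/spdk.sock. As a rough illustration only, the helper below sketches that polling pattern in bash; waitforlisten_sketch is a hypothetical name, it is not the actual autotest_common.sh implementation, and the retry count, sleep interval and rpc.py path are assumptions taken from the surrounding trace.

# Sketch of a waitforlisten-style poll loop (assumption: simplified, not the
# real autotest_common.sh helper). It waits until the given pid is alive and
# its JSON-RPC socket answers a trivial request, or gives up after N tries.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while ((max_retries-- > 0)); do
        # If the target died before it ever started listening, fail at once.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds once the app is ready to serve RPCs.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}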
00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.348 10:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.606 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.606 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:29.606 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:29.606 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.607 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:29.865 10:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:30.432 00:12:30.432 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.432 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.432 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.690 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.690 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:12:30.690 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.690 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.690 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.690 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.690 { 00:12:30.691 "cntlid": 1, 00:12:30.691 "qid": 0, 00:12:30.691 "state": "enabled", 00:12:30.691 "thread": "nvmf_tgt_poll_group_000", 00:12:30.691 "listen_address": { 00:12:30.691 "trtype": "TCP", 00:12:30.691 "adrfam": "IPv4", 00:12:30.691 "traddr": "10.0.0.2", 00:12:30.691 "trsvcid": "4420" 00:12:30.691 }, 00:12:30.691 "peer_address": { 00:12:30.691 "trtype": "TCP", 00:12:30.691 "adrfam": "IPv4", 00:12:30.691 "traddr": "10.0.0.1", 00:12:30.691 "trsvcid": "43084" 00:12:30.691 }, 00:12:30.691 "auth": { 00:12:30.691 "state": "completed", 00:12:30.691 "digest": "sha512", 00:12:30.691 "dhgroup": "ffdhe8192" 00:12:30.691 } 00:12:30.691 } 00:12:30.691 ]' 00:12:30.691 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.691 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.691 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.691 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.691 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.691 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.691 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.691 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.262 10:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-secret DHHC-1:03:NTRjNzg5ZWU5OGQ1ZDRiYWJjMzBiZGRhODg3MzhiY2NmYzgzZjEzNTA2YmFiNmI2M2FmYzNhMjE0ODc2Mjg4ZB3NM1A=: 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --dhchap-key key3 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:31.830 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:32.089 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.089 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:32.089 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.089 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:32.089 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.089 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:32.089 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.089 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.089 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.348 request: 00:12:32.348 { 00:12:32.348 "name": "nvme0", 00:12:32.349 "trtype": "tcp", 00:12:32.349 "traddr": "10.0.0.2", 00:12:32.349 "adrfam": "ipv4", 00:12:32.349 "trsvcid": "4420", 00:12:32.349 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:32.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c", 00:12:32.349 "prchk_reftag": false, 00:12:32.349 "prchk_guard": false, 00:12:32.349 "hdgst": false, 00:12:32.349 "ddgst": false, 00:12:32.349 "dhchap_key": "key3", 00:12:32.349 "method": "bdev_nvme_attach_controller", 00:12:32.349 "req_id": 1 00:12:32.349 } 00:12:32.349 Got JSON-RPC error response 00:12:32.349 response: 00:12:32.349 { 00:12:32.349 "code": -5, 00:12:32.349 "message": "Input/output error" 00:12:32.349 } 00:12:32.349 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 
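The attach attempts in this part of the trace are wrapped in NOT because they are supposed to fail: the host was restricted to sha256 (or handed a mismatched controller key), the target rejects the DH-HMAC-CHAP exchange, and the RPC surfaces the -5 Input/output error printed above. The snippet below is a minimal sketch of such an expected-failure wrapper; NOT_sketch is a hypothetical name and the body is simplified relative to the real autotest_common.sh helper, which also special-cases exit codes above 128 for signals, as the es checks in the trace show.

# Minimal sketch of a NOT-style negative-test wrapper (assumption: simplified;
# the real helper also distinguishes signal exits via es > 128).
NOT_sketch() {
    local es=0
    "$@" || es=$?
    # The wrapped command was expected to fail, so a zero exit status here
    # means the test itself should be reported as a failure.
    ((es != 0))
}

# Used in the same spirit as target/auth.sh, e.g. (illustrative arguments):
#   NOT_sketch hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
#       -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3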
00:12:32.349 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.349 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.349 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:32.349 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:12:32.349 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:12:32.349 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:32.349 10:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:32.608 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.608 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:32.608 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.608 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:32.608 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.608 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:32.608 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.608 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.608 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.608 request: 00:12:32.608 { 00:12:32.608 "name": "nvme0", 00:12:32.608 "trtype": "tcp", 00:12:32.608 "traddr": "10.0.0.2", 00:12:32.608 "adrfam": "ipv4", 00:12:32.608 "trsvcid": "4420", 00:12:32.608 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:32.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c", 00:12:32.608 "prchk_reftag": false, 00:12:32.608 "prchk_guard": false, 00:12:32.608 "hdgst": false, 00:12:32.608 "ddgst": false, 00:12:32.608 "dhchap_key": "key3", 00:12:32.608 "method": "bdev_nvme_attach_controller", 00:12:32.608 "req_id": 1 00:12:32.608 } 00:12:32.608 Got JSON-RPC error response 
00:12:32.608 response: 00:12:32.608 { 00:12:32.608 "code": -5, 00:12:32.608 "message": "Input/output error" 00:12:32.608 } 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.867 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.126 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.126 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:33.126 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:33.126 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:33.126 10:51:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:33.126 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:33.126 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:33.126 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:33.126 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:33.126 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:33.126 request: 00:12:33.126 { 00:12:33.126 "name": "nvme0", 00:12:33.126 "trtype": "tcp", 00:12:33.126 "traddr": "10.0.0.2", 00:12:33.126 "adrfam": "ipv4", 00:12:33.126 "trsvcid": "4420", 00:12:33.126 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:33.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c", 00:12:33.126 "prchk_reftag": false, 00:12:33.126 "prchk_guard": false, 00:12:33.126 "hdgst": false, 00:12:33.126 "ddgst": false, 00:12:33.126 "dhchap_key": "key0", 00:12:33.126 "dhchap_ctrlr_key": "key1", 00:12:33.126 "method": "bdev_nvme_attach_controller", 00:12:33.126 "req_id": 1 00:12:33.126 } 00:12:33.126 Got JSON-RPC error response 00:12:33.126 response: 00:12:33.126 { 00:12:33.126 "code": -5, 00:12:33.126 "message": "Input/output error" 00:12:33.126 } 00:12:33.127 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:33.127 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:33.127 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:33.127 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:33.127 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:33.127 10:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:33.386 00:12:33.645 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:33.645 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.645 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:33.645 10:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.645 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.645 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.904 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:33.904 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:33.904 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68650 00:12:33.904 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68650 ']' 00:12:33.904 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68650 00:12:33.904 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:33.904 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:33.904 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68650 00:12:34.163 killing process with pid 68650 00:12:34.163 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:34.163 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:34.163 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68650' 00:12:34.163 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68650 00:12:34.163 10:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68650 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.422 rmmod nvme_tcp 00:12:34.422 rmmod nvme_fabrics 00:12:34.422 rmmod nvme_keyring 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 71679 ']' 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 71679 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 71679 ']' 00:12:34.422 
10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 71679 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71679 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:34.422 killing process with pid 71679 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71679' 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 71679 00:12:34.422 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 71679 00:12:34.681 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.681 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.681 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.681 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.681 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.681 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.681 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.681 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.681 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:34.681 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.BDB /tmp/spdk.key-sha256.YLz /tmp/spdk.key-sha384.Bd9 /tmp/spdk.key-sha512.Bun /tmp/spdk.key-sha512.z1r /tmp/spdk.key-sha384.1GK /tmp/spdk.key-sha256.jwQ '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:34.941 00:12:34.941 real 2m51.642s 00:12:34.941 user 6m49.560s 00:12:34.941 sys 0m27.201s 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.941 ************************************ 00:12:34.941 END TEST nvmf_auth_target 00:12:34.941 ************************************ 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.941 ************************************ 00:12:34.941 START TEST nvmf_bdevio_no_huge 00:12:34.941 ************************************ 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:34.941 * Looking for test storage... 00:12:34.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.941 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:34.942 Cannot find device "nvmf_tgt_br" 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:34.942 Cannot find device "nvmf_tgt_br2" 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:34.942 Cannot find device "nvmf_tgt_br" 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:34.942 Cannot find device "nvmf_tgt_br2" 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:34.942 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:35.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:35.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.201 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:35.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:12:35.202 00:12:35.202 --- 10.0.0.2 ping statistics --- 00:12:35.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.202 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:35.202 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.202 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:35.202 00:12:35.202 --- 10.0.0.3 ping statistics --- 00:12:35.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.202 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:12:35.202 00:12:35.202 --- 10.0.0.1 ping statistics --- 00:12:35.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.202 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:35.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=71996 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 71996 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71996 ']' 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:35.202 10:51:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:35.461 [2024-07-25 10:51:04.986058] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
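The block above rebuilds the virtual test network from scratch (nvmf_veth_init) and then starts a fresh nvmf_tgt inside it without hugepages. The condensed sketch below strings the same steps together in one place; every address, interface name and flag is taken from the trace, but it is a simplification: the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the error handling and bookkeeping of the real nvmf/common.sh helpers are omitted.

# Condensed sketch of the veth/netns topology set up by nvmf_veth_init above
# (assumption: simplified; the real helper also tears down leftovers first and
# wires up a second target interface).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # host side must reach the target address

# The no-huge variant then launches the target inside the namespace with a
# plain 1024 MB memory pool instead of hugepages (--no-huge -s 1024):
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!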
00:12:35.461 [2024-07-25 10:51:04.986168] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:35.461 [2024-07-25 10:51:05.135879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.720 [2024-07-25 10:51:05.327019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.720 [2024-07-25 10:51:05.327313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.720 [2024-07-25 10:51:05.327762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.720 [2024-07-25 10:51:05.328202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.720 [2024-07-25 10:51:05.328478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.720 [2024-07-25 10:51:05.328869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:35.720 [2024-07-25 10:51:05.328947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:35.720 [2024-07-25 10:51:05.332891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:35.720 [2024-07-25 10:51:05.332907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.720 [2024-07-25 10:51:05.337537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:36.288 [2024-07-25 10:51:05.979790] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.288 10:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:36.288 Malloc0 00:12:36.288 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.288 10:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:36.288 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.288 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:36.288 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.288 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:36.288 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.288 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:36.547 [2024-07-25 10:51:06.029264] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:36.547 { 00:12:36.547 "params": { 00:12:36.547 "name": "Nvme$subsystem", 00:12:36.547 "trtype": "$TEST_TRANSPORT", 00:12:36.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:36.547 "adrfam": "ipv4", 00:12:36.547 "trsvcid": "$NVMF_PORT", 00:12:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:36.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:36.547 "hdgst": ${hdgst:-false}, 00:12:36.547 "ddgst": ${ddgst:-false} 00:12:36.547 }, 00:12:36.547 "method": "bdev_nvme_attach_controller" 00:12:36.547 } 00:12:36.547 EOF 00:12:36.547 )") 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
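The trace above assembles the bdevio target one RPC at a time while gen_nvmf_target_json builds the initiator configuration that bdevio reads from /dev/fd/62. Condensed into a standalone sketch (every argument is taken from the trace itself; only the $rpc shorthand is added, and rpc.py is assumed to talk to the default /var/tmp/spdk.sock of the nvmf_tgt started above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, options exactly as passed by bdevio.sh
$rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host to connect
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 through the subsystem
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listener inside the target namespace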
00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:36.547 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:36.547 "params": { 00:12:36.547 "name": "Nvme1", 00:12:36.547 "trtype": "tcp", 00:12:36.547 "traddr": "10.0.0.2", 00:12:36.547 "adrfam": "ipv4", 00:12:36.547 "trsvcid": "4420", 00:12:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:36.547 "hdgst": false, 00:12:36.547 "ddgst": false 00:12:36.547 }, 00:12:36.547 "method": "bdev_nvme_attach_controller" 00:12:36.547 }' 00:12:36.547 [2024-07-25 10:51:06.099453] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:36.547 [2024-07-25 10:51:06.099604] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72032 ] 00:12:36.547 [2024-07-25 10:51:06.241132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.805 [2024-07-25 10:51:06.367701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.805 [2024-07-25 10:51:06.367835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.805 [2024-07-25 10:51:06.367838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.805 [2024-07-25 10:51:06.381165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:36.805 I/O targets: 00:12:36.805 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:36.805 00:12:36.805 00:12:36.805 CUnit - A unit testing framework for C - Version 2.1-3 00:12:36.805 http://cunit.sourceforge.net/ 00:12:36.805 00:12:36.805 00:12:36.805 Suite: bdevio tests on: Nvme1n1 00:12:37.064 Test: blockdev write read block ...passed 00:12:37.064 Test: blockdev write zeroes read block ...passed 00:12:37.064 Test: blockdev write zeroes read no split ...passed 00:12:37.064 Test: blockdev write zeroes read split ...passed 00:12:37.064 Test: blockdev write zeroes read split partial ...passed 00:12:37.064 Test: blockdev reset ...[2024-07-25 10:51:06.579345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:37.064 [2024-07-25 10:51:06.579643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f3870 (9): Bad file descriptor 00:12:37.064 [2024-07-25 10:51:06.595290] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:37.064 passed 00:12:37.064 Test: blockdev write read 8 blocks ...passed 00:12:37.064 Test: blockdev write read size > 128k ...passed 00:12:37.064 Test: blockdev write read invalid size ...passed 00:12:37.064 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.064 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.064 Test: blockdev write read max offset ...passed 00:12:37.064 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.064 Test: blockdev writev readv 8 blocks ...passed 00:12:37.064 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.064 Test: blockdev writev readv block ...passed 00:12:37.064 Test: blockdev writev readv size > 128k ...passed 00:12:37.064 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.064 Test: blockdev comparev and writev ...[2024-07-25 10:51:06.604538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.064 [2024-07-25 10:51:06.604705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:37.064 [2024-07-25 10:51:06.604733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.064 [2024-07-25 10:51:06.604745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:37.064 [2024-07-25 10:51:06.605080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.064 [2024-07-25 10:51:06.605106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:37.064 [2024-07-25 10:51:06.605125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.064 [2024-07-25 10:51:06.605135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:37.064 [2024-07-25 10:51:06.605442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.064 [2024-07-25 10:51:06.605464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:37.064 [2024-07-25 10:51:06.605481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.064 [2024-07-25 10:51:06.605490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:37.064 [2024-07-25 10:51:06.605833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.064 [2024-07-25 10:51:06.605865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:37.064 [2024-07-25 10:51:06.605884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.064 [2024-07-25 10:51:06.605894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:37.064 passed 00:12:37.064 Test: blockdev nvme passthru rw ...passed 00:12:37.064 Test: blockdev nvme passthru vendor specific ...[2024-07-25 10:51:06.606618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:37.065 [2024-07-25 10:51:06.606647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:37.065 [2024-07-25 10:51:06.606755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:37.065 [2024-07-25 10:51:06.606771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:37.065 passed 00:12:37.065 Test: blockdev nvme admin passthru ...[2024-07-25 10:51:06.606887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:37.065 [2024-07-25 10:51:06.606908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:37.065 [2024-07-25 10:51:06.607011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:37.065 [2024-07-25 10:51:06.607027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:37.065 passed 00:12:37.065 Test: blockdev copy ...passed 00:12:37.065 00:12:37.065 Run Summary: Type Total Ran Passed Failed Inactive 00:12:37.065 suites 1 1 n/a 0 0 00:12:37.065 tests 23 23 23 0 0 00:12:37.065 asserts 152 152 152 0 n/a 00:12:37.065 00:12:37.065 Elapsed time = 0.173 seconds 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:37.324 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:37.324 rmmod nvme_tcp 00:12:37.584 rmmod nvme_fabrics 00:12:37.584 rmmod nvme_keyring 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 71996 ']' 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 71996 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71996 ']' 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71996 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71996 00:12:37.584 killing process with pid 71996 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71996' 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71996 00:12:37.584 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71996 00:12:37.851 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.851 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:37.851 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:37.851 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.851 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.851 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.851 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.851 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:38.128 ************************************ 00:12:38.128 END TEST nvmf_bdevio_no_huge 00:12:38.128 ************************************ 00:12:38.128 00:12:38.128 real 0m3.133s 00:12:38.128 user 0m10.036s 00:12:38.128 sys 0m1.266s 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
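The shutdown sequence above follows a small reusable pattern: confirm the pid is still alive, look up its command name, check whether it is a sudo wrapper, then kill and reap it. A simplified sketch of that idiom (the helper in autotest_common.sh does more, e.g. it handles the sudo case differently; this keeps only the path actually taken in the trace):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # nothing to kill
    kill -0 "$pid" 2>/dev/null || return 1     # process already gone?
    local name
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_3 for an SPDK app thread
    [ "$name" != sudo ] || return 1            # the real helper special-cases sudo; here we just bail out
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it when it is our child
}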
00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:38.128 ************************************ 00:12:38.128 START TEST nvmf_tls 00:12:38.128 ************************************ 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:38.128 * Looking for test storage... 00:12:38.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.128 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
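The very long PATH values above appear because paths/export.sh prepends the same Go, protoc and golangci directories every time it is sourced by a nested test script, so the variable accumulates duplicates. That is harmless but noisy; a guard like the following (purely illustrative, not part of the SPDK scripts) would keep each directory on PATH only once:

prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already present, leave PATH alone
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/golangci/1.54.2/bin
export PATH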
00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:38.129 Cannot find device 
"nvmf_tgt_br" 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:38.129 Cannot find device "nvmf_tgt_br2" 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:38.129 Cannot find device "nvmf_tgt_br" 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:38.129 Cannot find device "nvmf_tgt_br2" 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:38.129 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:38.388 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:38.388 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:38.388 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:38.388 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:38.388 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:38.388 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:38.388 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:38.388 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:38.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:38.389 00:12:38.389 --- 10.0.0.2 ping statistics --- 00:12:38.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.389 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:38.389 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:38.389 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:12:38.389 00:12:38.389 --- 10.0.0.3 ping statistics --- 00:12:38.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.389 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:38.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:38.389 00:12:38.389 --- 10.0.0.1 ping statistics --- 00:12:38.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.389 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72218 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72218 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72218 ']' 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:38.389 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:38.647 [2024-07-25 10:51:08.168195] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:38.647 [2024-07-25 10:51:08.168531] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.647 [2024-07-25 10:51:08.310585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.905 [2024-07-25 10:51:08.431737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.905 [2024-07-25 10:51:08.431804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.905 [2024-07-25 10:51:08.431818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.905 [2024-07-25 10:51:08.431830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.905 [2024-07-25 10:51:08.431839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.905 [2024-07-25 10:51:08.431904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.471 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.472 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:39.472 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.472 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:39.472 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:39.730 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.730 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:39.730 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:39.989 true 00:12:39.989 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:39.989 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:40.248 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:40.248 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:40.248 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:40.521 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:40.521 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:40.779 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:40.779 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:40.779 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:41.038 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:41.038 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:12:41.038 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:41.038 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:41.038 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:41.038 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:41.607 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:41.607 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:41.607 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:41.607 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:41.607 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:12:41.866 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:41.866 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:41.866 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:42.125 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:42.125 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:42.384 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:42.384 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:42.384 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:42.384 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:42.384 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:42.384 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:42.384 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:42.384 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:42.384 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.qNPoqiRoUl 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6Nd8X6u7xh 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.qNPoqiRoUl 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6Nd8X6u7xh 00:12:42.643 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:42.901 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:43.160 [2024-07-25 10:51:12.840943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:43.418 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.qNPoqiRoUl 00:12:43.418 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qNPoqiRoUl 00:12:43.418 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:43.418 [2024-07-25 10:51:13.114889] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.419 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:43.677 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:43.935 [2024-07-25 10:51:13.575003] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:43.935 [2024-07-25 10:51:13.575274] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.935 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:44.193 malloc0 00:12:44.193 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:44.451 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qNPoqiRoUl 00:12:44.710 [2024-07-25 10:51:14.418915] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:44.710 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qNPoqiRoUl 00:12:56.918 Initializing NVMe Controllers 00:12:56.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:56.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:56.918 Initialization complete. Launching workers. 00:12:56.918 ======================================================== 00:12:56.918 Latency(us) 00:12:56.918 Device Information : IOPS MiB/s Average min max 00:12:56.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9457.76 36.94 6768.83 1003.31 12029.18 00:12:56.918 ======================================================== 00:12:56.918 Total : 9457.76 36.94 6768.83 1003.31 12029.18 00:12:56.918 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qNPoqiRoUl 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qNPoqiRoUl' 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72453 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72453 /var/tmp/bdevperf.sock 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72453 ']' 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:56.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
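By this point tls.sh has generated two PSKs in the NVMe/TCP interchange format (the NVMeTLSkey-1:01:...: strings above), written them to mode-0600 temp files, stood up a TLS-enabled target, and confirmed with spdk_nvme_perf that the correct key connects; bdevperf is now being started to repeat the check through the bdev layer. Condensed, the target-side setup traced above is (arguments, NQNs and key paths exactly as logged; the target was started with --wait-for-rpc, hence the explicit framework_start_init):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc sock_set_default_impl -i ssl                      # use the ssl socket implementation
$rpc sock_impl_set_options -i ssl --tls-version 13     # pin TLS 1.3
$rpc framework_start_init                              # finish startup of the --wait-for-rpc target
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure (TLS) listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qNPoqiRoUl   # host must present this PSK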
00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.918 10:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.918 [2024-07-25 10:51:24.713732] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:56.918 [2024-07-25 10:51:24.714164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72453 ] 00:12:56.918 [2024-07-25 10:51:24.865360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.918 [2024-07-25 10:51:24.996817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.918 [2024-07-25 10:51:25.055127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:56.918 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.918 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:56.918 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qNPoqiRoUl 00:12:56.918 [2024-07-25 10:51:25.926971] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:56.918 [2024-07-25 10:51:25.927109] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:56.918 TLSTESTn1 00:12:56.918 10:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:56.918 Running I/O for 10 seconds... 
00:13:06.917 00:13:06.917 Latency(us) 00:13:06.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.917 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:06.917 Verification LBA range: start 0x0 length 0x2000 00:13:06.917 TLSTESTn1 : 10.02 3832.36 14.97 0.00 0.00 33334.14 8043.05 35508.60 00:13:06.917 =================================================================================================================== 00:13:06.917 Total : 3832.36 14.97 0.00 0.00 33334.14 8043.05 35508.60 00:13:06.917 0 00:13:06.917 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:06.917 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72453 00:13:06.917 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72453 ']' 00:13:06.917 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72453 00:13:06.917 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:06.917 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:06.917 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72453 00:13:06.917 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:06.917 killing process with pid 72453 00:13:06.917 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:06.917 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72453' 00:13:06.918 Received shutdown signal, test time was about 10.000000 seconds 00:13:06.918 00:13:06.918 Latency(us) 00:13:06.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.918 =================================================================================================================== 00:13:06.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72453 00:13:06.918 [2024-07-25 10:51:36.194040] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72453 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Nd8X6u7xh 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Nd8X6u7xh 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.918 10:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Nd8X6u7xh 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6Nd8X6u7xh' 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72582 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72582 /var/tmp/bdevperf.sock 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72582 ']' 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:06.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.918 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:06.918 [2024-07-25 10:51:36.483378] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:06.918 [2024-07-25 10:51:36.483841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72582 ] 00:13:06.918 [2024-07-25 10:51:36.624291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.177 [2024-07-25 10:51:36.736592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.177 [2024-07-25 10:51:36.793169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:07.745 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.745 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:07.745 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Nd8X6u7xh 00:13:08.020 [2024-07-25 10:51:37.702273] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:08.020 [2024-07-25 10:51:37.702693] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:08.020 [2024-07-25 10:51:37.707952] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:08.020 [2024-07-25 10:51:37.708533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b81f0 (107): Transport endpoint is not connected 00:13:08.020 [2024-07-25 10:51:37.709517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b81f0 (9): Bad file descriptor 00:13:08.020 [2024-07-25 10:51:37.710513] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:08.020 [2024-07-25 10:51:37.710543] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:08.020 [2024-07-25 10:51:37.710559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
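For reference, the failure traced above comes from a single bdev_nvme_attach_controller call issued against bdevperf's RPC socket. Reformatted for readability (same arguments as in the trace; /var/tmp/bdevperf.sock is the socket the test passes via -r), the call is:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.6Nd8X6u7xh
  # target/tls.sh@146 runs this under NOT: the attach is expected to fail,
  # and the JSON-RPC "Input/output error" below is the expected outcome.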
00:13:08.020 request: 00:13:08.020 { 00:13:08.020 "name": "TLSTEST", 00:13:08.020 "trtype": "tcp", 00:13:08.020 "traddr": "10.0.0.2", 00:13:08.020 "adrfam": "ipv4", 00:13:08.020 "trsvcid": "4420", 00:13:08.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.020 "prchk_reftag": false, 00:13:08.020 "prchk_guard": false, 00:13:08.020 "hdgst": false, 00:13:08.020 "ddgst": false, 00:13:08.020 "psk": "/tmp/tmp.6Nd8X6u7xh", 00:13:08.020 "method": "bdev_nvme_attach_controller", 00:13:08.020 "req_id": 1 00:13:08.020 } 00:13:08.020 Got JSON-RPC error response 00:13:08.020 response: 00:13:08.020 { 00:13:08.020 "code": -5, 00:13:08.020 "message": "Input/output error" 00:13:08.020 } 00:13:08.020 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72582 00:13:08.020 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72582 ']' 00:13:08.020 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72582 00:13:08.020 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:08.020 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.020 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72582 00:13:08.290 killing process with pid 72582 00:13:08.290 Received shutdown signal, test time was about 10.000000 seconds 00:13:08.290 00:13:08.290 Latency(us) 00:13:08.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.290 =================================================================================================================== 00:13:08.290 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:08.290 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:08.290 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:08.290 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72582' 00:13:08.290 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72582 00:13:08.290 [2024-07-25 10:51:37.766458] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:08.290 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72582 00:13:08.290 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:08.290 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:08.290 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qNPoqiRoUl 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qNPoqiRoUl 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qNPoqiRoUl 00:13:08.291 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qNPoqiRoUl' 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72610 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72610 /var/tmp/bdevperf.sock 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72610 ']' 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:08.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.291 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.550 [2024-07-25 10:51:38.049344] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:08.550 [2024-07-25 10:51:38.049437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72610 ] 00:13:08.550 [2024-07-25 10:51:38.185825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.809 [2024-07-25 10:51:38.290551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.809 [2024-07-25 10:51:38.345603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:09.377 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.377 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:09.377 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qNPoqiRoUl 00:13:09.635 [2024-07-25 10:51:39.263279] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:09.635 [2024-07-25 10:51:39.263694] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:09.635 [2024-07-25 10:51:39.275178] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:09.635 [2024-07-25 10:51:39.275379] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:09.635 [2024-07-25 10:51:39.275564] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spd[2024-07-25 10:51:39.275693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e41f0 (107): Transport endpoint is not connected 00:13:09.635 k_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:09.636 [2024-07-25 10:51:39.276682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e41f0 (9): Bad file descriptor 00:13:09.636 [2024-07-25 10:51:39.277680] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:09.636 [2024-07-25 10:51:39.277838] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:09.636 [2024-07-25 10:51:39.277980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
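The target-side errors above show the lookup key the TLS listener computes: the identity string "NVMe0R01 <hostnqn> <subnqn>". The target has no PSK registered for host2 against cnode1, so the lookup fails and the connection is dropped, which is what target/tls.sh@149 (another NOT case) is checking. A registration that would satisfy this lookup, deliberately absent in the test and shown here only as a sketch mirroring the add_host call used elsewhere in this run, would be:

  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qNPoqiRoUl
  # hypothetical: the test never registers host2, so the attach below must fail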
00:13:09.636 request: 00:13:09.636 { 00:13:09.636 "name": "TLSTEST", 00:13:09.636 "trtype": "tcp", 00:13:09.636 "traddr": "10.0.0.2", 00:13:09.636 "adrfam": "ipv4", 00:13:09.636 "trsvcid": "4420", 00:13:09.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.636 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:09.636 "prchk_reftag": false, 00:13:09.636 "prchk_guard": false, 00:13:09.636 "hdgst": false, 00:13:09.636 "ddgst": false, 00:13:09.636 "psk": "/tmp/tmp.qNPoqiRoUl", 00:13:09.636 "method": "bdev_nvme_attach_controller", 00:13:09.636 "req_id": 1 00:13:09.636 } 00:13:09.636 Got JSON-RPC error response 00:13:09.636 response: 00:13:09.636 { 00:13:09.636 "code": -5, 00:13:09.636 "message": "Input/output error" 00:13:09.636 } 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72610 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72610 ']' 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72610 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72610 00:13:09.636 killing process with pid 72610 00:13:09.636 Received shutdown signal, test time was about 10.000000 seconds 00:13:09.636 00:13:09.636 Latency(us) 00:13:09.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.636 =================================================================================================================== 00:13:09.636 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72610' 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72610 00:13:09.636 [2024-07-25 10:51:39.326127] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:09.636 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72610 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qNPoqiRoUl 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qNPoqiRoUl 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qNPoqiRoUl 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qNPoqiRoUl' 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72636 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72636 /var/tmp/bdevperf.sock 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72636 ']' 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:09.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.895 10:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:09.895 [2024-07-25 10:51:39.608290] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:09.895 [2024-07-25 10:51:39.608647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72636 ] 00:13:10.154 [2024-07-25 10:51:39.747942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.154 [2024-07-25 10:51:39.853790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.413 [2024-07-25 10:51:39.914023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:10.981 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.981 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:10.981 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qNPoqiRoUl 00:13:11.240 [2024-07-25 10:51:40.861707] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:11.241 [2024-07-25 10:51:40.861832] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:11.241 [2024-07-25 10:51:40.866869] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:11.241 [2024-07-25 10:51:40.866916] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:11.241 [2024-07-25 10:51:40.866968] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:11.241 [2024-07-25 10:51:40.867548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da91f0 (107): Transport endpoint is not connected 00:13:11.241 [2024-07-25 10:51:40.868535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da91f0 (9): Bad file descriptor 00:13:11.241 [2024-07-25 10:51:40.869532] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:11.241 [2024-07-25 10:51:40.869557] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:11.241 [2024-07-25 10:51:40.869571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
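Here the same lookup fails from the other direction: the identity names subsystem cnode2, for which the target has no host/PSK mapping. One way to see what the target actually has configured is to query it over its own RPC socket (the default /var/tmp/spdk.sock used by this run's nvmf_tgt; this query is not part of the test script and is shown only as a debugging aid):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems
  # lists each subsystem with its listeners, namespaces and allowed hosts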
00:13:11.241 request: 00:13:11.241 { 00:13:11.241 "name": "TLSTEST", 00:13:11.241 "trtype": "tcp", 00:13:11.241 "traddr": "10.0.0.2", 00:13:11.241 "adrfam": "ipv4", 00:13:11.241 "trsvcid": "4420", 00:13:11.241 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:11.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:11.241 "prchk_reftag": false, 00:13:11.241 "prchk_guard": false, 00:13:11.241 "hdgst": false, 00:13:11.241 "ddgst": false, 00:13:11.241 "psk": "/tmp/tmp.qNPoqiRoUl", 00:13:11.241 "method": "bdev_nvme_attach_controller", 00:13:11.241 "req_id": 1 00:13:11.241 } 00:13:11.241 Got JSON-RPC error response 00:13:11.241 response: 00:13:11.241 { 00:13:11.241 "code": -5, 00:13:11.241 "message": "Input/output error" 00:13:11.241 } 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72636 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72636 ']' 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72636 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72636 00:13:11.241 killing process with pid 72636 00:13:11.241 Received shutdown signal, test time was about 10.000000 seconds 00:13:11.241 00:13:11.241 Latency(us) 00:13:11.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.241 =================================================================================================================== 00:13:11.241 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72636' 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72636 00:13:11.241 [2024-07-25 10:51:40.917743] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:11.241 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72636 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72665 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72665 /var/tmp/bdevperf.sock 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72665 ']' 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.500 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:11.500 [2024-07-25 10:51:41.206544] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:11.500 [2024-07-25 10:51:41.206639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72665 ] 00:13:11.760 [2024-07-25 10:51:41.342034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.760 [2024-07-25 10:51:41.441856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.018 [2024-07-25 10:51:41.499960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:12.586 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.586 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:12.586 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:12.845 [2024-07-25 10:51:42.485855] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:12.845 [2024-07-25 10:51:42.487725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdbc00 (9): Bad file descriptor 00:13:12.845 [2024-07-25 10:51:42.488719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:12.845 [2024-07-25 10:51:42.488747] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:12.845 [2024-07-25 10:51:42.488761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
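In this case (target/tls.sh@155) the attach omits --psk entirely, so the initiator attempts a plain TCP connection. The listener for cnode1 was created with the -k flag, which appears to mark it as requiring a secure channel, so the connection is closed and the attach fails the same way. The listener form in question, as used by setup_nvmf_tgt (target/tls.sh@53) later in this log, is:

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k
  # -k: secure channel required on this listener, so a PSK-less attach is rejected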
00:13:12.845 request: 00:13:12.845 { 00:13:12.845 "name": "TLSTEST", 00:13:12.845 "trtype": "tcp", 00:13:12.845 "traddr": "10.0.0.2", 00:13:12.845 "adrfam": "ipv4", 00:13:12.845 "trsvcid": "4420", 00:13:12.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:12.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:12.845 "prchk_reftag": false, 00:13:12.845 "prchk_guard": false, 00:13:12.845 "hdgst": false, 00:13:12.845 "ddgst": false, 00:13:12.845 "method": "bdev_nvme_attach_controller", 00:13:12.845 "req_id": 1 00:13:12.845 } 00:13:12.845 Got JSON-RPC error response 00:13:12.845 response: 00:13:12.845 { 00:13:12.845 "code": -5, 00:13:12.845 "message": "Input/output error" 00:13:12.845 } 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72665 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72665 ']' 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72665 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72665 00:13:12.845 killing process with pid 72665 00:13:12.845 Received shutdown signal, test time was about 10.000000 seconds 00:13:12.845 00:13:12.845 Latency(us) 00:13:12.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.845 =================================================================================================================== 00:13:12.845 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72665' 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72665 00:13:12.845 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72665 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 72218 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72218 ']' 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72218 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 72218 00:13:13.105 killing process with pid 72218 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72218' 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72218 00:13:13.105 [2024-07-25 10:51:42.801985] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:13.105 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72218 00:13:13.363 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:13.363 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:13.363 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:13.363 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:13.363 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:13.363 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:13.364 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.1sgGCHDUzq 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.1sgGCHDUzq 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72707 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72707 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72707 ']' 00:13:13.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
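The 48-character hex key is wrapped by format_interchange_psk into the interchange string seen above: a "NVMeTLSkey-1" prefix, a digest field ("02" here, from the "2" argument), and a base64 blob carrying the key text plus four trailing bytes. A quick decode-only check (no SPDK involved) confirms the blob really contains the configured key; the trailing bytes are assumed to be an integrity checksum appended by the script:

  key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  echo "$key_long" | cut -d: -f3 | base64 -d | head -c 48; echo
  # prints 00112233445566778899aabbccddeeff0011223344556677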
00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.622 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:13.622 [2024-07-25 10:51:43.176381] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:13.622 [2024-07-25 10:51:43.176656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.622 [2024-07-25 10:51:43.312022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.880 [2024-07-25 10:51:43.425203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.880 [2024-07-25 10:51:43.425498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.880 [2024-07-25 10:51:43.425518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.880 [2024-07-25 10:51:43.425528] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.880 [2024-07-25 10:51:43.425535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:13.880 [2024-07-25 10:51:43.425570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.880 [2024-07-25 10:51:43.482761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:14.447 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.447 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:14.447 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.447 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:14.447 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.706 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.706 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.1sgGCHDUzq 00:13:14.706 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1sgGCHDUzq 00:13:14.706 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:14.706 [2024-07-25 10:51:44.402277] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.706 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:14.965 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:15.223 [2024-07-25 10:51:44.930354] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:15.223 [2024-07-25 10:51:44.930626] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.223 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:15.498 malloc0 00:13:15.498 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:15.782 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1sgGCHDUzq 00:13:16.041 [2024-07-25 10:51:45.703248] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1sgGCHDUzq 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1sgGCHDUzq' 00:13:16.041 10:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72759 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72759 /var/tmp/bdevperf.sock 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72759 ']' 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:16.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.041 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:16.300 [2024-07-25 10:51:45.783434] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:16.300 [2024-07-25 10:51:45.783725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72759 ] 00:13:16.300 [2024-07-25 10:51:45.925363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.559 [2024-07-25 10:51:46.049777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.559 [2024-07-25 10:51:46.109146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:17.126 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.126 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:17.126 10:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1sgGCHDUzq 00:13:17.385 [2024-07-25 10:51:47.033794] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:17.385 [2024-07-25 10:51:47.034005] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:17.385 TLSTESTn1 00:13:17.644 10:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:17.644 Running I/O for 10 seconds... 
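This is the positive case: with the 0600-permission key registered on both sides, the attach succeeds and bdevperf runs the 10-second verify workload against TLSTESTn1. Collected from the trace above (paths and NQNs exactly as in this run), the target-side sequence behind it is:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1sgGCHDUzq
  # after which the initiator attaches with the same key file:
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.1sgGCHDUzq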
00:13:27.639 00:13:27.640 Latency(us) 00:13:27.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.640 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:27.640 Verification LBA range: start 0x0 length 0x2000 00:13:27.640 TLSTESTn1 : 10.02 4004.55 15.64 0.00 0.00 31904.29 6345.08 33840.41 00:13:27.640 =================================================================================================================== 00:13:27.640 Total : 4004.55 15.64 0.00 0.00 31904.29 6345.08 33840.41 00:13:27.640 0 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72759 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72759 ']' 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72759 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72759 00:13:27.640 killing process with pid 72759 00:13:27.640 Received shutdown signal, test time was about 10.000000 seconds 00:13:27.640 00:13:27.640 Latency(us) 00:13:27.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.640 =================================================================================================================== 00:13:27.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72759' 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72759 00:13:27.640 [2024-07-25 10:51:57.318067] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:27.640 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72759 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.1sgGCHDUzq 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1sgGCHDUzq 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1sgGCHDUzq 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:27.899 10:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1sgGCHDUzq 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1sgGCHDUzq' 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72892 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72892 /var/tmp/bdevperf.sock 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72892 ']' 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:27.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.899 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.899 [2024-07-25 10:51:57.599996] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:27.899 [2024-07-25 10:51:57.600420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72892 ] 00:13:28.158 [2024-07-25 10:51:57.737700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.158 [2024-07-25 10:51:57.841017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.158 [2024-07-25 10:51:57.895275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:29.094 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.094 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:29.094 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1sgGCHDUzq 00:13:29.094 [2024-07-25 10:51:58.761822] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:29.094 [2024-07-25 10:51:58.761953] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:29.095 [2024-07-25 10:51:58.761991] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.1sgGCHDUzq 00:13:29.095 request: 00:13:29.095 { 00:13:29.095 "name": "TLSTEST", 00:13:29.095 "trtype": "tcp", 00:13:29.095 "traddr": "10.0.0.2", 00:13:29.095 "adrfam": "ipv4", 00:13:29.095 "trsvcid": "4420", 00:13:29.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:29.095 "prchk_reftag": false, 00:13:29.095 "prchk_guard": false, 00:13:29.095 "hdgst": false, 00:13:29.095 "ddgst": false, 00:13:29.095 "psk": "/tmp/tmp.1sgGCHDUzq", 00:13:29.095 "method": "bdev_nvme_attach_controller", 00:13:29.095 "req_id": 1 00:13:29.095 } 00:13:29.095 Got JSON-RPC error response 00:13:29.095 response: 00:13:29.095 { 00:13:29.095 "code": -1, 00:13:29.095 "message": "Operation not permitted" 00:13:29.095 } 00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72892 00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72892 ']' 00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72892 00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72892 00:13:29.095 killing process with pid 72892 00:13:29.095 Received shutdown signal, test time was about 10.000000 seconds 00:13:29.095 00:13:29.095 Latency(us) 00:13:29.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.095 =================================================================================================================== 00:13:29.095 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 
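The failure above is a file-permission check rather than a key mismatch: the same key that just completed a full TLSTESTn1 run is rejected once target/tls.sh@170 loosens its mode, and the "Incorrect permissions for PSK file" error comes from bdev_nvme when it loads the key. The only difference between the accepted and rejected states in this run is the chmod applied to the key file:

  chmod 0600 /tmp/tmp.1sgGCHDUzq   # mode used for the successful run above
  chmod 0666 /tmp/tmp.1sgGCHDUzq   # world-readable: attach now fails with "Operation not permitted"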
00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72892' 00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72892 00:13:29.095 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72892 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 72707 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72707 ']' 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72707 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:29.352 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72707 00:13:29.611 killing process with pid 72707 00:13:29.611 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:29.611 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:29.611 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72707' 00:13:29.611 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72707 00:13:29.611 [2024-07-25 10:51:59.090844] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:29.611 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72707 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72930 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72930 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72930 ']' 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.869 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.869 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:29.869 [2024-07-25 10:51:59.459698] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:29.869 [2024-07-25 10:51:59.459795] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.869 [2024-07-25 10:51:59.593678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.128 [2024-07-25 10:51:59.699663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.128 [2024-07-25 10:51:59.699726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.128 [2024-07-25 10:51:59.699753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.128 [2024-07-25 10:51:59.699761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.128 [2024-07-25 10:51:59.699769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
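The nvmfappstart/waitforlisten pair traced above reduces to starting nvmf_tgt inside the test network namespace and polling its RPC socket until it answers. A rough sketch under the same paths as the log; the rpc_get_methods probe is an assumed stand-in for the harness's own readiness check:

    # Sketch of the start-and-wait pattern, not the harness code itself.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target is ready (assumed probe).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done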
00:13:30.128 [2024-07-25 10:51:59.699826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.128 [2024-07-25 10:51:59.775954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:30.695 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.695 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:30.695 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.695 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:30.695 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.954 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.954 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.1sgGCHDUzq 00:13:30.954 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:30.954 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.1sgGCHDUzq 00:13:30.954 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:30.954 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.954 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:30.954 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.955 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.1sgGCHDUzq 00:13:30.955 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1sgGCHDUzq 00:13:30.955 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:30.955 [2024-07-25 10:52:00.646459] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.955 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:31.222 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:31.492 [2024-07-25 10:52:01.134613] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:31.492 [2024-07-25 10:52:01.134841] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.492 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:31.750 malloc0 00:13:31.750 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:32.009 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1sgGCHDUzq 00:13:32.268 [2024-07-25 10:52:01.861112] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:32.268 [2024-07-25 10:52:01.861173] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:32.268 [2024-07-25 10:52:01.861225] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:32.268 request: 00:13:32.268 { 00:13:32.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.268 "host": "nqn.2016-06.io.spdk:host1", 00:13:32.268 "psk": "/tmp/tmp.1sgGCHDUzq", 00:13:32.268 "method": "nvmf_subsystem_add_host", 00:13:32.268 "req_id": 1 00:13:32.268 } 00:13:32.268 Got JSON-RPC error response 00:13:32.268 response: 00:13:32.268 { 00:13:32.268 "code": -32603, 00:13:32.268 "message": "Internal error" 00:13:32.268 } 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 72930 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72930 ']' 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72930 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72930 00:13:32.268 killing process with pid 72930 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72930' 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72930 00:13:32.268 10:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72930 00:13:32.526 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.1sgGCHDUzq 00:13:32.526 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:32.526 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.526 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:32.526 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.526 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72993 00:13:32.526 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:32.526 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72993 00:13:32.526 10:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72993 ']' 00:13:32.526 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.527 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.527 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.527 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.527 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.785 [2024-07-25 10:52:02.292845] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:32.785 [2024-07-25 10:52:02.292945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.785 [2024-07-25 10:52:02.427106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.044 [2024-07-25 10:52:02.546374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.044 [2024-07-25 10:52:02.546461] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.044 [2024-07-25 10:52:02.546489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.044 [2024-07-25 10:52:02.546498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.044 [2024-07-25 10:52:02.546505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
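The trace notices printed at every target start suggest the same two follow-ups; both commands below are quoted from the notices themselves, and only the copy destination is an arbitrary assumption:

    # Snapshot the live nvmf tracepoints (instance id 0, as the notice suggests).
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/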
00:13:33.044 [2024-07-25 10:52:02.546532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.044 [2024-07-25 10:52:02.622315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:33.611 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.611 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:33.611 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.611 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:33.611 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.611 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.611 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.1sgGCHDUzq 00:13:33.611 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1sgGCHDUzq 00:13:33.611 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:33.870 [2024-07-25 10:52:03.508295] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.870 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:34.128 10:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:34.386 [2024-07-25 10:52:03.988431] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:34.386 [2024-07-25 10:52:03.988685] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.386 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:34.645 malloc0 00:13:34.645 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:34.904 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1sgGCHDUzq 00:13:35.164 [2024-07-25 10:52:04.714992] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:35.164 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73042 00:13:35.164 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:35.164 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:35.164 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73042 /var/tmp/bdevperf.sock 00:13:35.164 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73042 ']' 
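With the key tightened to mode 0600, setup_nvmf_tgt walks the whole target-side sequence traced above, and this time nvmf_subsystem_add_host succeeds. Collected into one sketch for readability; commands and arguments are the ones from the log, only the $rpc/$key shorthand and the comments are added:

    key=/tmp/tmp.1sgGCHDUzq
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o                       # TCP transport init
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420 -k                         # -k: TLS listener (experimental)
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk "$key"                # accepted now that the key is 0600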
00:13:35.164 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:35.164 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:35.164 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:35.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:35.164 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:35.164 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:35.164 [2024-07-25 10:52:04.799888] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:35.164 [2024-07-25 10:52:04.800008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73042 ] 00:13:35.423 [2024-07-25 10:52:04.943540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.423 [2024-07-25 10:52:05.054257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.423 [2024-07-25 10:52:05.109578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:36.355 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:36.355 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:36.355 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1sgGCHDUzq 00:13:36.355 [2024-07-25 10:52:06.074334] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:36.355 [2024-07-25 10:52:06.074498] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:36.612 TLSTESTn1 00:13:36.612 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:36.869 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:36.869 "subsystems": [ 00:13:36.869 { 00:13:36.869 "subsystem": "keyring", 00:13:36.869 "config": [] 00:13:36.869 }, 00:13:36.869 { 00:13:36.869 "subsystem": "iobuf", 00:13:36.869 "config": [ 00:13:36.869 { 00:13:36.869 "method": "iobuf_set_options", 00:13:36.869 "params": { 00:13:36.869 "small_pool_count": 8192, 00:13:36.869 "large_pool_count": 1024, 00:13:36.869 "small_bufsize": 8192, 00:13:36.869 "large_bufsize": 135168 00:13:36.869 } 00:13:36.869 } 00:13:36.869 ] 00:13:36.869 }, 00:13:36.869 { 00:13:36.869 "subsystem": "sock", 00:13:36.869 "config": [ 00:13:36.869 { 00:13:36.869 "method": "sock_set_default_impl", 00:13:36.869 "params": { 00:13:36.869 "impl_name": "uring" 00:13:36.869 } 00:13:36.869 }, 00:13:36.869 { 00:13:36.869 "method": "sock_impl_set_options", 00:13:36.869 "params": { 00:13:36.869 "impl_name": "ssl", 00:13:36.870 "recv_buf_size": 4096, 00:13:36.870 
"send_buf_size": 4096, 00:13:36.870 "enable_recv_pipe": true, 00:13:36.870 "enable_quickack": false, 00:13:36.870 "enable_placement_id": 0, 00:13:36.870 "enable_zerocopy_send_server": true, 00:13:36.870 "enable_zerocopy_send_client": false, 00:13:36.870 "zerocopy_threshold": 0, 00:13:36.870 "tls_version": 0, 00:13:36.870 "enable_ktls": false 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "sock_impl_set_options", 00:13:36.870 "params": { 00:13:36.870 "impl_name": "posix", 00:13:36.870 "recv_buf_size": 2097152, 00:13:36.870 "send_buf_size": 2097152, 00:13:36.870 "enable_recv_pipe": true, 00:13:36.870 "enable_quickack": false, 00:13:36.870 "enable_placement_id": 0, 00:13:36.870 "enable_zerocopy_send_server": true, 00:13:36.870 "enable_zerocopy_send_client": false, 00:13:36.870 "zerocopy_threshold": 0, 00:13:36.870 "tls_version": 0, 00:13:36.870 "enable_ktls": false 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "sock_impl_set_options", 00:13:36.870 "params": { 00:13:36.870 "impl_name": "uring", 00:13:36.870 "recv_buf_size": 2097152, 00:13:36.870 "send_buf_size": 2097152, 00:13:36.870 "enable_recv_pipe": true, 00:13:36.870 "enable_quickack": false, 00:13:36.870 "enable_placement_id": 0, 00:13:36.870 "enable_zerocopy_send_server": false, 00:13:36.870 "enable_zerocopy_send_client": false, 00:13:36.870 "zerocopy_threshold": 0, 00:13:36.870 "tls_version": 0, 00:13:36.870 "enable_ktls": false 00:13:36.870 } 00:13:36.870 } 00:13:36.870 ] 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "subsystem": "vmd", 00:13:36.870 "config": [] 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "subsystem": "accel", 00:13:36.870 "config": [ 00:13:36.870 { 00:13:36.870 "method": "accel_set_options", 00:13:36.870 "params": { 00:13:36.870 "small_cache_size": 128, 00:13:36.870 "large_cache_size": 16, 00:13:36.870 "task_count": 2048, 00:13:36.870 "sequence_count": 2048, 00:13:36.870 "buf_count": 2048 00:13:36.870 } 00:13:36.870 } 00:13:36.870 ] 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "subsystem": "bdev", 00:13:36.870 "config": [ 00:13:36.870 { 00:13:36.870 "method": "bdev_set_options", 00:13:36.870 "params": { 00:13:36.870 "bdev_io_pool_size": 65535, 00:13:36.870 "bdev_io_cache_size": 256, 00:13:36.870 "bdev_auto_examine": true, 00:13:36.870 "iobuf_small_cache_size": 128, 00:13:36.870 "iobuf_large_cache_size": 16 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "bdev_raid_set_options", 00:13:36.870 "params": { 00:13:36.870 "process_window_size_kb": 1024, 00:13:36.870 "process_max_bandwidth_mb_sec": 0 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "bdev_iscsi_set_options", 00:13:36.870 "params": { 00:13:36.870 "timeout_sec": 30 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "bdev_nvme_set_options", 00:13:36.870 "params": { 00:13:36.870 "action_on_timeout": "none", 00:13:36.870 "timeout_us": 0, 00:13:36.870 "timeout_admin_us": 0, 00:13:36.870 "keep_alive_timeout_ms": 10000, 00:13:36.870 "arbitration_burst": 0, 00:13:36.870 "low_priority_weight": 0, 00:13:36.870 "medium_priority_weight": 0, 00:13:36.870 "high_priority_weight": 0, 00:13:36.870 "nvme_adminq_poll_period_us": 10000, 00:13:36.870 "nvme_ioq_poll_period_us": 0, 00:13:36.870 "io_queue_requests": 0, 00:13:36.870 "delay_cmd_submit": true, 00:13:36.870 "transport_retry_count": 4, 00:13:36.870 "bdev_retry_count": 3, 00:13:36.870 "transport_ack_timeout": 0, 00:13:36.870 "ctrlr_loss_timeout_sec": 0, 00:13:36.870 "reconnect_delay_sec": 0, 00:13:36.870 
"fast_io_fail_timeout_sec": 0, 00:13:36.870 "disable_auto_failback": false, 00:13:36.870 "generate_uuids": false, 00:13:36.870 "transport_tos": 0, 00:13:36.870 "nvme_error_stat": false, 00:13:36.870 "rdma_srq_size": 0, 00:13:36.870 "io_path_stat": false, 00:13:36.870 "allow_accel_sequence": false, 00:13:36.870 "rdma_max_cq_size": 0, 00:13:36.870 "rdma_cm_event_timeout_ms": 0, 00:13:36.870 "dhchap_digests": [ 00:13:36.870 "sha256", 00:13:36.870 "sha384", 00:13:36.870 "sha512" 00:13:36.870 ], 00:13:36.870 "dhchap_dhgroups": [ 00:13:36.870 "null", 00:13:36.870 "ffdhe2048", 00:13:36.870 "ffdhe3072", 00:13:36.870 "ffdhe4096", 00:13:36.870 "ffdhe6144", 00:13:36.870 "ffdhe8192" 00:13:36.870 ] 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "bdev_nvme_set_hotplug", 00:13:36.870 "params": { 00:13:36.870 "period_us": 100000, 00:13:36.870 "enable": false 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "bdev_malloc_create", 00:13:36.870 "params": { 00:13:36.870 "name": "malloc0", 00:13:36.870 "num_blocks": 8192, 00:13:36.870 "block_size": 4096, 00:13:36.870 "physical_block_size": 4096, 00:13:36.870 "uuid": "e15a077b-dd2f-40e3-8d99-c5fb1f69fb44", 00:13:36.870 "optimal_io_boundary": 0, 00:13:36.870 "md_size": 0, 00:13:36.870 "dif_type": 0, 00:13:36.870 "dif_is_head_of_md": false, 00:13:36.870 "dif_pi_format": 0 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "bdev_wait_for_examine" 00:13:36.870 } 00:13:36.870 ] 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "subsystem": "nbd", 00:13:36.870 "config": [] 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "subsystem": "scheduler", 00:13:36.870 "config": [ 00:13:36.870 { 00:13:36.870 "method": "framework_set_scheduler", 00:13:36.870 "params": { 00:13:36.870 "name": "static" 00:13:36.870 } 00:13:36.870 } 00:13:36.870 ] 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "subsystem": "nvmf", 00:13:36.870 "config": [ 00:13:36.870 { 00:13:36.870 "method": "nvmf_set_config", 00:13:36.870 "params": { 00:13:36.870 "discovery_filter": "match_any", 00:13:36.870 "admin_cmd_passthru": { 00:13:36.870 "identify_ctrlr": false 00:13:36.870 } 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "nvmf_set_max_subsystems", 00:13:36.870 "params": { 00:13:36.870 "max_subsystems": 1024 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "nvmf_set_crdt", 00:13:36.870 "params": { 00:13:36.870 "crdt1": 0, 00:13:36.870 "crdt2": 0, 00:13:36.870 "crdt3": 0 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "nvmf_create_transport", 00:13:36.870 "params": { 00:13:36.870 "trtype": "TCP", 00:13:36.870 "max_queue_depth": 128, 00:13:36.870 "max_io_qpairs_per_ctrlr": 127, 00:13:36.870 "in_capsule_data_size": 4096, 00:13:36.870 "max_io_size": 131072, 00:13:36.870 "io_unit_size": 131072, 00:13:36.870 "max_aq_depth": 128, 00:13:36.870 "num_shared_buffers": 511, 00:13:36.870 "buf_cache_size": 4294967295, 00:13:36.870 "dif_insert_or_strip": false, 00:13:36.870 "zcopy": false, 00:13:36.870 "c2h_success": false, 00:13:36.870 "sock_priority": 0, 00:13:36.870 "abort_timeout_sec": 1, 00:13:36.870 "ack_timeout": 0, 00:13:36.870 "data_wr_pool_size": 0 00:13:36.870 } 00:13:36.870 }, 00:13:36.870 { 00:13:36.870 "method": "nvmf_create_subsystem", 00:13:36.870 "params": { 00:13:36.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.871 "allow_any_host": false, 00:13:36.871 "serial_number": "SPDK00000000000001", 00:13:36.871 "model_number": "SPDK bdev Controller", 00:13:36.871 "max_namespaces": 10, 00:13:36.871 
"min_cntlid": 1, 00:13:36.871 "max_cntlid": 65519, 00:13:36.871 "ana_reporting": false 00:13:36.871 } 00:13:36.871 }, 00:13:36.871 { 00:13:36.871 "method": "nvmf_subsystem_add_host", 00:13:36.871 "params": { 00:13:36.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.871 "host": "nqn.2016-06.io.spdk:host1", 00:13:36.871 "psk": "/tmp/tmp.1sgGCHDUzq" 00:13:36.871 } 00:13:36.871 }, 00:13:36.871 { 00:13:36.871 "method": "nvmf_subsystem_add_ns", 00:13:36.871 "params": { 00:13:36.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.871 "namespace": { 00:13:36.871 "nsid": 1, 00:13:36.871 "bdev_name": "malloc0", 00:13:36.871 "nguid": "E15A077BDD2F40E38D99C5FB1F69FB44", 00:13:36.871 "uuid": "e15a077b-dd2f-40e3-8d99-c5fb1f69fb44", 00:13:36.871 "no_auto_visible": false 00:13:36.871 } 00:13:36.871 } 00:13:36.871 }, 00:13:36.871 { 00:13:36.871 "method": "nvmf_subsystem_add_listener", 00:13:36.871 "params": { 00:13:36.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.871 "listen_address": { 00:13:36.871 "trtype": "TCP", 00:13:36.871 "adrfam": "IPv4", 00:13:36.871 "traddr": "10.0.0.2", 00:13:36.871 "trsvcid": "4420" 00:13:36.871 }, 00:13:36.871 "secure_channel": true 00:13:36.871 } 00:13:36.871 } 00:13:36.871 ] 00:13:36.871 } 00:13:36.871 ] 00:13:36.871 }' 00:13:36.871 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:37.128 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:37.128 "subsystems": [ 00:13:37.128 { 00:13:37.128 "subsystem": "keyring", 00:13:37.128 "config": [] 00:13:37.128 }, 00:13:37.128 { 00:13:37.128 "subsystem": "iobuf", 00:13:37.128 "config": [ 00:13:37.128 { 00:13:37.128 "method": "iobuf_set_options", 00:13:37.128 "params": { 00:13:37.128 "small_pool_count": 8192, 00:13:37.128 "large_pool_count": 1024, 00:13:37.128 "small_bufsize": 8192, 00:13:37.128 "large_bufsize": 135168 00:13:37.128 } 00:13:37.128 } 00:13:37.128 ] 00:13:37.128 }, 00:13:37.128 { 00:13:37.128 "subsystem": "sock", 00:13:37.128 "config": [ 00:13:37.128 { 00:13:37.128 "method": "sock_set_default_impl", 00:13:37.128 "params": { 00:13:37.128 "impl_name": "uring" 00:13:37.128 } 00:13:37.128 }, 00:13:37.128 { 00:13:37.128 "method": "sock_impl_set_options", 00:13:37.128 "params": { 00:13:37.128 "impl_name": "ssl", 00:13:37.128 "recv_buf_size": 4096, 00:13:37.128 "send_buf_size": 4096, 00:13:37.128 "enable_recv_pipe": true, 00:13:37.128 "enable_quickack": false, 00:13:37.128 "enable_placement_id": 0, 00:13:37.128 "enable_zerocopy_send_server": true, 00:13:37.128 "enable_zerocopy_send_client": false, 00:13:37.128 "zerocopy_threshold": 0, 00:13:37.128 "tls_version": 0, 00:13:37.128 "enable_ktls": false 00:13:37.128 } 00:13:37.128 }, 00:13:37.128 { 00:13:37.128 "method": "sock_impl_set_options", 00:13:37.128 "params": { 00:13:37.128 "impl_name": "posix", 00:13:37.128 "recv_buf_size": 2097152, 00:13:37.128 "send_buf_size": 2097152, 00:13:37.128 "enable_recv_pipe": true, 00:13:37.128 "enable_quickack": false, 00:13:37.128 "enable_placement_id": 0, 00:13:37.128 "enable_zerocopy_send_server": true, 00:13:37.128 "enable_zerocopy_send_client": false, 00:13:37.128 "zerocopy_threshold": 0, 00:13:37.128 "tls_version": 0, 00:13:37.128 "enable_ktls": false 00:13:37.128 } 00:13:37.128 }, 00:13:37.128 { 00:13:37.128 "method": "sock_impl_set_options", 00:13:37.128 "params": { 00:13:37.128 "impl_name": "uring", 00:13:37.128 "recv_buf_size": 2097152, 00:13:37.128 "send_buf_size": 2097152, 
00:13:37.128 "enable_recv_pipe": true, 00:13:37.128 "enable_quickack": false, 00:13:37.128 "enable_placement_id": 0, 00:13:37.128 "enable_zerocopy_send_server": false, 00:13:37.128 "enable_zerocopy_send_client": false, 00:13:37.128 "zerocopy_threshold": 0, 00:13:37.128 "tls_version": 0, 00:13:37.128 "enable_ktls": false 00:13:37.128 } 00:13:37.128 } 00:13:37.128 ] 00:13:37.128 }, 00:13:37.128 { 00:13:37.128 "subsystem": "vmd", 00:13:37.128 "config": [] 00:13:37.128 }, 00:13:37.128 { 00:13:37.128 "subsystem": "accel", 00:13:37.128 "config": [ 00:13:37.128 { 00:13:37.128 "method": "accel_set_options", 00:13:37.128 "params": { 00:13:37.128 "small_cache_size": 128, 00:13:37.128 "large_cache_size": 16, 00:13:37.128 "task_count": 2048, 00:13:37.128 "sequence_count": 2048, 00:13:37.128 "buf_count": 2048 00:13:37.128 } 00:13:37.128 } 00:13:37.128 ] 00:13:37.128 }, 00:13:37.128 { 00:13:37.128 "subsystem": "bdev", 00:13:37.128 "config": [ 00:13:37.128 { 00:13:37.128 "method": "bdev_set_options", 00:13:37.128 "params": { 00:13:37.128 "bdev_io_pool_size": 65535, 00:13:37.128 "bdev_io_cache_size": 256, 00:13:37.128 "bdev_auto_examine": true, 00:13:37.128 "iobuf_small_cache_size": 128, 00:13:37.128 "iobuf_large_cache_size": 16 00:13:37.128 } 00:13:37.128 }, 00:13:37.128 { 00:13:37.128 "method": "bdev_raid_set_options", 00:13:37.128 "params": { 00:13:37.128 "process_window_size_kb": 1024, 00:13:37.128 "process_max_bandwidth_mb_sec": 0 00:13:37.128 } 00:13:37.128 }, 00:13:37.128 { 00:13:37.128 "method": "bdev_iscsi_set_options", 00:13:37.128 "params": { 00:13:37.128 "timeout_sec": 30 00:13:37.128 } 00:13:37.128 }, 00:13:37.129 { 00:13:37.129 "method": "bdev_nvme_set_options", 00:13:37.129 "params": { 00:13:37.129 "action_on_timeout": "none", 00:13:37.129 "timeout_us": 0, 00:13:37.129 "timeout_admin_us": 0, 00:13:37.129 "keep_alive_timeout_ms": 10000, 00:13:37.129 "arbitration_burst": 0, 00:13:37.129 "low_priority_weight": 0, 00:13:37.129 "medium_priority_weight": 0, 00:13:37.129 "high_priority_weight": 0, 00:13:37.129 "nvme_adminq_poll_period_us": 10000, 00:13:37.129 "nvme_ioq_poll_period_us": 0, 00:13:37.129 "io_queue_requests": 512, 00:13:37.129 "delay_cmd_submit": true, 00:13:37.129 "transport_retry_count": 4, 00:13:37.129 "bdev_retry_count": 3, 00:13:37.129 "transport_ack_timeout": 0, 00:13:37.129 "ctrlr_loss_timeout_sec": 0, 00:13:37.129 "reconnect_delay_sec": 0, 00:13:37.129 "fast_io_fail_timeout_sec": 0, 00:13:37.129 "disable_auto_failback": false, 00:13:37.129 "generate_uuids": false, 00:13:37.129 "transport_tos": 0, 00:13:37.129 "nvme_error_stat": false, 00:13:37.129 "rdma_srq_size": 0, 00:13:37.129 "io_path_stat": false, 00:13:37.129 "allow_accel_sequence": false, 00:13:37.129 "rdma_max_cq_size": 0, 00:13:37.129 "rdma_cm_event_timeout_ms": 0, 00:13:37.129 "dhchap_digests": [ 00:13:37.129 "sha256", 00:13:37.129 "sha384", 00:13:37.129 "sha512" 00:13:37.129 ], 00:13:37.129 "dhchap_dhgroups": [ 00:13:37.129 "null", 00:13:37.129 "ffdhe2048", 00:13:37.129 "ffdhe3072", 00:13:37.129 "ffdhe4096", 00:13:37.129 "ffdhe6144", 00:13:37.129 "ffdhe8192" 00:13:37.129 ] 00:13:37.129 } 00:13:37.129 }, 00:13:37.129 { 00:13:37.129 "method": "bdev_nvme_attach_controller", 00:13:37.129 "params": { 00:13:37.129 "name": "TLSTEST", 00:13:37.129 "trtype": "TCP", 00:13:37.129 "adrfam": "IPv4", 00:13:37.129 "traddr": "10.0.0.2", 00:13:37.129 "trsvcid": "4420", 00:13:37.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.129 "prchk_reftag": false, 00:13:37.129 "prchk_guard": false, 00:13:37.129 "ctrlr_loss_timeout_sec": 0, 
00:13:37.129 "reconnect_delay_sec": 0, 00:13:37.129 "fast_io_fail_timeout_sec": 0, 00:13:37.129 "psk": "/tmp/tmp.1sgGCHDUzq", 00:13:37.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:37.129 "hdgst": false, 00:13:37.129 "ddgst": false 00:13:37.129 } 00:13:37.129 }, 00:13:37.129 { 00:13:37.129 "method": "bdev_nvme_set_hotplug", 00:13:37.129 "params": { 00:13:37.129 "period_us": 100000, 00:13:37.129 "enable": false 00:13:37.129 } 00:13:37.129 }, 00:13:37.129 { 00:13:37.129 "method": "bdev_wait_for_examine" 00:13:37.129 } 00:13:37.129 ] 00:13:37.129 }, 00:13:37.129 { 00:13:37.129 "subsystem": "nbd", 00:13:37.129 "config": [] 00:13:37.129 } 00:13:37.129 ] 00:13:37.129 }' 00:13:37.129 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 73042 00:13:37.129 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73042 ']' 00:13:37.129 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73042 00:13:37.129 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:37.129 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.129 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73042 00:13:37.403 killing process with pid 73042 00:13:37.403 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.403 00:13:37.403 Latency(us) 00:13:37.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.403 =================================================================================================================== 00:13:37.403 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:37.403 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:37.403 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:37.403 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73042' 00:13:37.403 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73042 00:13:37.403 [2024-07-25 10:52:06.871610] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:37.403 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73042 00:13:37.403 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 72993 00:13:37.403 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72993 ']' 00:13:37.403 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72993 00:13:37.403 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:37.403 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.403 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72993 00:13:37.403 killing process with pid 72993 00:13:37.403 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:37.403 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:37.403 10:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72993' 00:13:37.403 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72993 00:13:37.403 [2024-07-25 10:52:07.113725] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:37.403 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72993 00:13:37.987 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:37.987 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.987 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:37.987 "subsystems": [ 00:13:37.987 { 00:13:37.987 "subsystem": "keyring", 00:13:37.987 "config": [] 00:13:37.987 }, 00:13:37.987 { 00:13:37.987 "subsystem": "iobuf", 00:13:37.987 "config": [ 00:13:37.987 { 00:13:37.987 "method": "iobuf_set_options", 00:13:37.987 "params": { 00:13:37.987 "small_pool_count": 8192, 00:13:37.987 "large_pool_count": 1024, 00:13:37.987 "small_bufsize": 8192, 00:13:37.987 "large_bufsize": 135168 00:13:37.987 } 00:13:37.987 } 00:13:37.987 ] 00:13:37.987 }, 00:13:37.987 { 00:13:37.987 "subsystem": "sock", 00:13:37.987 "config": [ 00:13:37.987 { 00:13:37.988 "method": "sock_set_default_impl", 00:13:37.988 "params": { 00:13:37.988 "impl_name": "uring" 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "sock_impl_set_options", 00:13:37.988 "params": { 00:13:37.988 "impl_name": "ssl", 00:13:37.988 "recv_buf_size": 4096, 00:13:37.988 "send_buf_size": 4096, 00:13:37.988 "enable_recv_pipe": true, 00:13:37.988 "enable_quickack": false, 00:13:37.988 "enable_placement_id": 0, 00:13:37.988 "enable_zerocopy_send_server": true, 00:13:37.988 "enable_zerocopy_send_client": false, 00:13:37.988 "zerocopy_threshold": 0, 00:13:37.988 "tls_version": 0, 00:13:37.988 "enable_ktls": false 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "sock_impl_set_options", 00:13:37.988 "params": { 00:13:37.988 "impl_name": "posix", 00:13:37.988 "recv_buf_size": 2097152, 00:13:37.988 "send_buf_size": 2097152, 00:13:37.988 "enable_recv_pipe": true, 00:13:37.988 "enable_quickack": false, 00:13:37.988 "enable_placement_id": 0, 00:13:37.988 "enable_zerocopy_send_server": true, 00:13:37.988 "enable_zerocopy_send_client": false, 00:13:37.988 "zerocopy_threshold": 0, 00:13:37.988 "tls_version": 0, 00:13:37.988 "enable_ktls": false 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "sock_impl_set_options", 00:13:37.988 "params": { 00:13:37.988 "impl_name": "uring", 00:13:37.988 "recv_buf_size": 2097152, 00:13:37.988 "send_buf_size": 2097152, 00:13:37.988 "enable_recv_pipe": true, 00:13:37.988 "enable_quickack": false, 00:13:37.988 "enable_placement_id": 0, 00:13:37.988 "enable_zerocopy_send_server": false, 00:13:37.988 "enable_zerocopy_send_client": false, 00:13:37.988 "zerocopy_threshold": 0, 00:13:37.988 "tls_version": 0, 00:13:37.988 "enable_ktls": false 00:13:37.988 } 00:13:37.988 } 00:13:37.988 ] 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "subsystem": "vmd", 00:13:37.988 "config": [] 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "subsystem": "accel", 00:13:37.988 "config": [ 00:13:37.988 { 00:13:37.988 "method": "accel_set_options", 00:13:37.988 "params": { 00:13:37.988 "small_cache_size": 128, 00:13:37.988 "large_cache_size": 16, 
00:13:37.988 "task_count": 2048, 00:13:37.988 "sequence_count": 2048, 00:13:37.988 "buf_count": 2048 00:13:37.988 } 00:13:37.988 } 00:13:37.988 ] 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "subsystem": "bdev", 00:13:37.988 "config": [ 00:13:37.988 { 00:13:37.988 "method": "bdev_set_options", 00:13:37.988 "params": { 00:13:37.988 "bdev_io_pool_size": 65535, 00:13:37.988 "bdev_io_cache_size": 256, 00:13:37.988 "bdev_auto_examine": true, 00:13:37.988 "iobuf_small_cache_size": 128, 00:13:37.988 "iobuf_large_cache_size": 16 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "bdev_raid_set_options", 00:13:37.988 "params": { 00:13:37.988 "process_window_size_kb": 1024, 00:13:37.988 "process_max_bandwidth_mb_sec": 0 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "bdev_iscsi_set_options", 00:13:37.988 "params": { 00:13:37.988 "timeout_sec": 30 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "bdev_nvme_set_options", 00:13:37.988 "params": { 00:13:37.988 "action_on_timeout": "none", 00:13:37.988 "timeout_us": 0, 00:13:37.988 "timeout_admin_us": 0, 00:13:37.988 "keep_alive_timeout_ms": 10000, 00:13:37.988 "arbitration_burst": 0, 00:13:37.988 "low_priority_weight": 0, 00:13:37.988 "medium_priority_weight": 0, 00:13:37.988 "high_priority_weight": 0, 00:13:37.988 "nvme_adminq_poll_period_us": 10000, 00:13:37.988 "nvme_ioq_poll_period_us": 0, 00:13:37.988 "io_queue_requests": 0, 00:13:37.988 "delay_cmd_submit": true, 00:13:37.988 "transport_retry_count": 4, 00:13:37.988 "bdev_retry_count": 3, 00:13:37.988 "transport_ack_timeout": 0, 00:13:37.988 "ctrlr_loss_timeout_sec": 0, 00:13:37.988 "reconnect_delay_sec": 0, 00:13:37.988 "fast_io_fail_timeout_sec": 0, 00:13:37.988 "disable_aut 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.988 o_failback": false, 00:13:37.988 "generate_uuids": false, 00:13:37.988 "transport_tos": 0, 00:13:37.988 "nvme_error_stat": false, 00:13:37.988 "rdma_srq_size": 0, 00:13:37.988 "io_path_stat": false, 00:13:37.988 "allow_accel_sequence": false, 00:13:37.988 "rdma_max_cq_size": 0, 00:13:37.988 "rdma_cm_event_timeout_ms": 0, 00:13:37.988 "dhchap_digests": [ 00:13:37.988 "sha256", 00:13:37.988 "sha384", 00:13:37.988 "sha512" 00:13:37.988 ], 00:13:37.988 "dhchap_dhgroups": [ 00:13:37.988 "null", 00:13:37.988 "ffdhe2048", 00:13:37.988 "ffdhe3072", 00:13:37.988 "ffdhe4096", 00:13:37.988 "ffdhe6144", 00:13:37.988 "ffdhe8192" 00:13:37.988 ] 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "bdev_nvme_set_hotplug", 00:13:37.988 "params": { 00:13:37.988 "period_us": 100000, 00:13:37.988 "enable": false 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "bdev_malloc_create", 00:13:37.988 "params": { 00:13:37.988 "name": "malloc0", 00:13:37.988 "num_blocks": 8192, 00:13:37.988 "block_size": 4096, 00:13:37.988 "physical_block_size": 4096, 00:13:37.988 "uuid": "e15a077b-dd2f-40e3-8d99-c5fb1f69fb44", 00:13:37.988 "optimal_io_boundary": 0, 00:13:37.988 "md_size": 0, 00:13:37.988 "dif_type": 0, 00:13:37.988 "dif_is_head_of_md": false, 00:13:37.988 "dif_pi_format": 0 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "bdev_wait_for_examine" 00:13:37.988 } 00:13:37.988 ] 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "subsystem": "nbd", 00:13:37.988 "config": [] 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "subsystem": "scheduler", 00:13:37.988 "config": [ 00:13:37.988 { 00:13:37.988 "method": "framework_set_scheduler", 
00:13:37.988 "params": { 00:13:37.988 "name": "static" 00:13:37.988 } 00:13:37.988 } 00:13:37.988 ] 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "subsystem": "nvmf", 00:13:37.988 "config": [ 00:13:37.988 { 00:13:37.988 "method": "nvmf_set_config", 00:13:37.988 "params": { 00:13:37.988 "discovery_filter": "match_any", 00:13:37.988 "admin_cmd_passthru": { 00:13:37.988 "identify_ctrlr": false 00:13:37.988 } 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "nvmf_set_max_subsystems", 00:13:37.988 "params": { 00:13:37.988 "max_subsystems": 1024 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "nvmf_set_crdt", 00:13:37.988 "params": { 00:13:37.988 "crdt1": 0, 00:13:37.988 "crdt2": 0, 00:13:37.988 "crdt3": 0 00:13:37.988 } 00:13:37.988 }, 00:13:37.988 { 00:13:37.988 "method": "nvmf_create_transport", 00:13:37.988 "params": { 00:13:37.988 "trtype": "TCP", 00:13:37.988 "max_queue_depth": 128, 00:13:37.988 "max_io_qpairs_per_ctrlr": 127, 00:13:37.988 "in_capsule_data_size": 4096, 00:13:37.988 "max_io_size": 131072, 00:13:37.988 "io_unit_size": 131072, 00:13:37.988 "max_aq_depth": 128, 00:13:37.988 "num_shared_buffers": 511, 00:13:37.988 "buf_cache_size": 4294967295, 00:13:37.988 "dif_insert_or_strip": false, 00:13:37.988 "zcopy": false, 00:13:37.988 "c2h_success": false, 00:13:37.988 "sock_priority": 0, 00:13:37.989 "abort_timeout_sec": 1, 00:13:37.989 "ack_timeout": 0, 00:13:37.989 "data_wr_pool_size": 0 00:13:37.989 } 00:13:37.989 }, 00:13:37.989 { 00:13:37.989 "method": "nvmf_create_subsystem", 00:13:37.989 "params": { 00:13:37.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.989 "allow_any_host": false, 00:13:37.989 "serial_number": "SPDK00000000000001", 00:13:37.989 "model_number": "SPDK bdev Controller", 00:13:37.989 "max_namespaces": 10, 00:13:37.989 "min_cntlid": 1, 00:13:37.989 "max_cntlid": 65519, 00:13:37.989 "ana_reporting": false 00:13:37.989 } 00:13:37.989 }, 00:13:37.989 { 00:13:37.989 "method": "nvmf_subsystem_add_host", 00:13:37.989 "params": { 00:13:37.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.989 "host": "nqn.2016-06.io.spdk:host1", 00:13:37.989 "psk": "/tmp/tmp.1sgGCHDUzq" 00:13:37.989 } 00:13:37.989 }, 00:13:37.989 { 00:13:37.989 "method": "nvmf_subsystem_add_ns", 00:13:37.989 "params": { 00:13:37.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.989 "namespace": { 00:13:37.989 "nsid": 1, 00:13:37.989 "bdev_name": "malloc0", 00:13:37.989 "nguid": "E15A077BDD2F40E38D99C5FB1F69FB44", 00:13:37.989 "uuid": "e15a077b-dd2f-40e3-8d99-c5fb1f69fb44", 00:13:37.989 "no_auto_visible": false 00:13:37.989 } 00:13:37.989 } 00:13:37.989 }, 00:13:37.989 { 00:13:37.989 "method": "nvmf_subsystem_add_listener", 00:13:37.989 "params": { 00:13:37.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.989 "listen_address": { 00:13:37.989 "trtype": "TCP", 00:13:37.989 "adrfam": "IPv4", 00:13:37.989 "traddr": "10.0.0.2", 00:13:37.989 "trsvcid": "4420" 00:13:37.989 }, 00:13:37.989 "secure_channel": true 00:13:37.989 } 00:13:37.989 } 00:13:37.989 ] 00:13:37.989 } 00:13:37.989 ] 00:13:37.989 }' 00:13:37.989 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:37.989 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73096 00:13:37.989 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:37.989 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73096 00:13:37.989 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73096 ']' 00:13:37.989 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.989 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.989 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.989 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.989 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.989 [2024-07-25 10:52:07.506413] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:37.989 [2024-07-25 10:52:07.506886] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.989 [2024-07-25 10:52:07.657393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.248 [2024-07-25 10:52:07.757698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.248 [2024-07-25 10:52:07.758070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.248 [2024-07-25 10:52:07.758090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.248 [2024-07-25 10:52:07.758099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.248 [2024-07-25 10:52:07.758108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
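The instance started above is not configured by hand: the target configuration captured earlier with save_config is fed back in at startup through the -c /dev/fd/62 redirect. A sketch of the same round trip; the temporary file stands in for the process substitution the harness uses:

    # Capture the running target's configuration over its RPC socket...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgtconf.json
    # ...and replay it at startup; <(cat ...) is what shows up as -c /dev/fd/NN in the log.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
        -c <(cat /tmp/tgtconf.json)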
00:13:38.248 [2024-07-25 10:52:07.758205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.248 [2024-07-25 10:52:07.926556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:38.506 [2024-07-25 10:52:07.991844] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.506 [2024-07-25 10:52:08.007764] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:38.506 [2024-07-25 10:52:08.023790] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:38.506 [2024-07-25 10:52:08.035056] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73128 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73128 /var/tmp/bdevperf.sock 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73128 ']' 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
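The initiator side repeats the same pattern: the bdevperf configuration saved from /var/tmp/bdevperf.sock, including the bdev_nvme_attach_controller entry that carries the TLS PSK, is echoed into -c /dev/fd/63, so TLSTESTn1 is recreated at startup without a separate attach RPC. A sketch with a temporary file in place of the process substitution:

    # Save the bdevperf app's configuration, attach entry and PSK path included...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        save_config > /tmp/bdevperfconf.json
    # ...and restart bdevperf from it; -z keeps it idle until perform_tests is sent.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(cat /tmp/bdevperfconf.json)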
00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.073 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:39.074 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:39.074 "subsystems": [ 00:13:39.074 { 00:13:39.074 "subsystem": "keyring", 00:13:39.074 "config": [] 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "subsystem": "iobuf", 00:13:39.074 "config": [ 00:13:39.074 { 00:13:39.074 "method": "iobuf_set_options", 00:13:39.074 "params": { 00:13:39.074 "small_pool_count": 8192, 00:13:39.074 "large_pool_count": 1024, 00:13:39.074 "small_bufsize": 8192, 00:13:39.074 "large_bufsize": 135168 00:13:39.074 } 00:13:39.074 } 00:13:39.074 ] 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "subsystem": "sock", 00:13:39.074 "config": [ 00:13:39.074 { 00:13:39.074 "method": "sock_set_default_impl", 00:13:39.074 "params": { 00:13:39.074 "impl_name": "uring" 00:13:39.074 } 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "method": "sock_impl_set_options", 00:13:39.074 "params": { 00:13:39.074 "impl_name": "ssl", 00:13:39.074 "recv_buf_size": 4096, 00:13:39.074 "send_buf_size": 4096, 00:13:39.074 "enable_recv_pipe": true, 00:13:39.074 "enable_quickack": false, 00:13:39.074 "enable_placement_id": 0, 00:13:39.074 "enable_zerocopy_send_server": true, 00:13:39.074 "enable_zerocopy_send_client": false, 00:13:39.074 "zerocopy_threshold": 0, 00:13:39.074 "tls_version": 0, 00:13:39.074 "enable_ktls": false 00:13:39.074 } 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "method": "sock_impl_set_options", 00:13:39.074 "params": { 00:13:39.074 "impl_name": "posix", 00:13:39.074 "recv_buf_size": 2097152, 00:13:39.074 "send_buf_size": 2097152, 00:13:39.074 "enable_recv_pipe": true, 00:13:39.074 "enable_quickack": false, 00:13:39.074 "enable_placement_id": 0, 00:13:39.074 "enable_zerocopy_send_server": true, 00:13:39.074 "enable_zerocopy_send_client": false, 00:13:39.074 "zerocopy_threshold": 0, 00:13:39.074 "tls_version": 0, 00:13:39.074 "enable_ktls": false 00:13:39.074 } 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "method": "sock_impl_set_options", 00:13:39.074 "params": { 00:13:39.074 "impl_name": "uring", 00:13:39.074 "recv_buf_size": 2097152, 00:13:39.074 "send_buf_size": 2097152, 00:13:39.074 "enable_recv_pipe": true, 00:13:39.074 "enable_quickack": false, 00:13:39.074 "enable_placement_id": 0, 00:13:39.074 "enable_zerocopy_send_server": false, 00:13:39.074 "enable_zerocopy_send_client": false, 00:13:39.074 "zerocopy_threshold": 0, 00:13:39.074 "tls_version": 0, 00:13:39.074 "enable_ktls": false 00:13:39.074 } 00:13:39.074 } 00:13:39.074 ] 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "subsystem": "vmd", 00:13:39.074 "config": [] 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "subsystem": "accel", 00:13:39.074 "config": [ 00:13:39.074 { 00:13:39.074 "method": "accel_set_options", 00:13:39.074 "params": { 00:13:39.074 "small_cache_size": 128, 00:13:39.074 "large_cache_size": 16, 00:13:39.074 "task_count": 2048, 00:13:39.074 "sequence_count": 2048, 00:13:39.074 "buf_count": 2048 00:13:39.074 } 00:13:39.074 } 00:13:39.074 ] 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "subsystem": "bdev", 00:13:39.074 "config": [ 00:13:39.074 { 00:13:39.074 "method": "bdev_set_options", 
00:13:39.074 "params": { 00:13:39.074 "bdev_io_pool_size": 65535, 00:13:39.074 "bdev_io_cache_size": 256, 00:13:39.074 "bdev_auto_examine": true, 00:13:39.074 "iobuf_small_cache_size": 128, 00:13:39.074 "iobuf_large_cache_size": 16 00:13:39.074 } 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "method": "bdev_raid_set_options", 00:13:39.074 "params": { 00:13:39.074 "process_window_size_kb": 1024, 00:13:39.074 "process_max_bandwidth_mb_sec": 0 00:13:39.074 } 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "method": "bdev_iscsi_set_options", 00:13:39.074 "params": { 00:13:39.074 "timeout_sec": 30 00:13:39.074 } 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "method": "bdev_nvme_set_options", 00:13:39.074 "params": { 00:13:39.074 "action_on_timeout": "none", 00:13:39.074 "timeout_us": 0, 00:13:39.074 "timeout_admin_us": 0, 00:13:39.074 "keep_alive_timeout_ms": 10000, 00:13:39.074 "arbitration_burst": 0, 00:13:39.074 "low_priority_weight": 0, 00:13:39.074 "medium_priority_weight": 0, 00:13:39.074 "high_priority_weight": 0, 00:13:39.074 "nvme_adminq_poll_period_us": 10000, 00:13:39.074 "nvme_ioq_poll_period_us": 0, 00:13:39.074 "io_queue_requests": 512, 00:13:39.074 "delay_cmd_submit": true, 00:13:39.074 "transport_retry_count": 4, 00:13:39.074 "bdev_retry_count": 3, 00:13:39.074 "transport_ack_timeout": 0, 00:13:39.074 "ctrlr_loss_timeout_sec": 0, 00:13:39.074 "reconnect_delay_sec": 0, 00:13:39.074 "fast_io_fail_timeout_sec": 0, 00:13:39.074 "disable_auto_failback": false, 00:13:39.074 "generate_uuids": false, 00:13:39.074 "transport_tos": 0, 00:13:39.074 "nvme_error_stat": false, 00:13:39.074 "rdma_srq_size": 0, 00:13:39.074 "io_path_stat": false, 00:13:39.074 "allow_accel_sequence": false, 00:13:39.074 "rdma_max_cq_size": 0, 00:13:39.074 "rdma_cm_event_timeout_ms": 0, 00:13:39.074 "dhchap_digests": [ 00:13:39.074 "sha256", 00:13:39.074 "sha384", 00:13:39.074 "sha512" 00:13:39.074 ], 00:13:39.074 "dhchap_dhgroups": [ 00:13:39.074 "null", 00:13:39.074 "ffdhe2048", 00:13:39.074 "ffdhe3072", 00:13:39.074 "ffdhe4096", 00:13:39.074 "ffdhe6144", 00:13:39.074 "ffdhe8192" 00:13:39.074 ] 00:13:39.074 } 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "method": "bdev_nvme_attach_controller", 00:13:39.074 "params": { 00:13:39.074 "name": "TLSTEST", 00:13:39.074 "trtype": "TCP", 00:13:39.074 "adrfam": "IPv4", 00:13:39.074 "traddr": "10.0.0.2", 00:13:39.074 "trsvcid": "4420", 00:13:39.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.074 "prchk_reftag": false, 00:13:39.074 "prchk_guard": false, 00:13:39.074 "ctrlr_loss_timeout_sec": 0, 00:13:39.074 "reconnect_delay_sec": 0, 00:13:39.074 "fast_io_fail_timeout_sec": 0, 00:13:39.074 "psk": "/tmp/tmp.1sgGCHDUzq", 00:13:39.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:39.074 "hdgst": false, 00:13:39.074 "ddgst": false 00:13:39.074 } 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "method": "bdev_nvme_set_hotplug", 00:13:39.074 "params": { 00:13:39.074 "period_us": 100000, 00:13:39.074 "enable": false 00:13:39.074 } 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "method": "bdev_wait_for_examine" 00:13:39.074 } 00:13:39.074 ] 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "subsystem": "nbd", 00:13:39.074 "config": [] 00:13:39.074 } 00:13:39.074 ] 00:13:39.074 }' 00:13:39.074 [2024-07-25 10:52:08.641822] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:39.074 [2024-07-25 10:52:08.641952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73128 ] 00:13:39.074 [2024-07-25 10:52:08.778459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.333 [2024-07-25 10:52:08.885216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.333 [2024-07-25 10:52:09.019356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:39.333 [2024-07-25 10:52:09.057320] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:39.333 [2024-07-25 10:52:09.057465] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:39.898 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.898 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:39.898 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:40.156 Running I/O for 10 seconds... 00:13:50.131 00:13:50.131 Latency(us) 00:13:50.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.131 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:50.131 Verification LBA range: start 0x0 length 0x2000 00:13:50.131 TLSTESTn1 : 10.01 4027.85 15.73 0.00 0.00 31718.94 5898.24 30980.65 00:13:50.131 =================================================================================================================== 00:13:50.131 Total : 4027.85 15.73 0.00 0.00 31718.94 5898.24 30980.65 00:13:50.131 0 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 73128 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73128 ']' 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73128 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73128 00:13:50.131 killing process with pid 73128 00:13:50.131 Received shutdown signal, test time was about 10.000000 seconds 00:13:50.131 00:13:50.131 Latency(us) 00:13:50.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.131 =================================================================================================================== 00:13:50.131 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 73128' 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73128 00:13:50.131 [2024-07-25 10:52:19.745356] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:50.131 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73128 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 73096 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73096 ']' 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73096 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73096 00:13:50.426 killing process with pid 73096 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73096' 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73096 00:13:50.426 [2024-07-25 10:52:19.999919] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:50.426 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73096 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73261 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73261 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73261 ']' 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
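Note: with the configuration loaded, the 10-second TLSTESTn1 run above is driven over bdevperf's RPC socket rather than by bdevperf itself, and both the bdevperf instance (pid 73128) and the first nvmf target (pid 73096) are then torn down with the killprocess helper. A minimal sketch of that drive-and-teardown step, using the pids from this log:

# Hedged sketch: kick off the workload on the already-running bdevperf (-z) instance,
# then stop both applications, mirroring target/tls.sh@211-215 traced above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests
killprocess 73128    # bdevperf (killprocess: autotest_common.sh helper seen in this log)
killprocess 73096    # first nvmf target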
00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.685 10:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.685 [2024-07-25 10:52:20.298088] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:50.685 [2024-07-25 10:52:20.298415] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.944 [2024-07-25 10:52:20.441079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.944 [2024-07-25 10:52:20.550014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.944 [2024-07-25 10:52:20.550292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.944 [2024-07-25 10:52:20.550465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.944 [2024-07-25 10:52:20.550609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.944 [2024-07-25 10:52:20.550651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.944 [2024-07-25 10:52:20.550828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.944 [2024-07-25 10:52:20.607163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.511 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.511 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:51.511 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:51.511 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:51.511 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.769 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.769 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.1sgGCHDUzq 00:13:51.769 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1sgGCHDUzq 00:13:51.769 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:51.770 [2024-07-25 10:52:21.469501] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.770 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:52.028 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:52.287 [2024-07-25 10:52:21.961696] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:52.287 [2024-07-25 10:52:21.962070] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.287 10:52:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:52.546 malloc0 00:13:52.546 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:52.804 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1sgGCHDUzq 00:13:53.063 [2024-07-25 10:52:22.730827] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:53.063 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:53.063 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73316 00:13:53.063 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.063 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73316 /var/tmp/bdevperf.sock 00:13:53.063 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73316 ']' 00:13:53.063 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.063 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.063 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.063 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.063 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.322 [2024-07-25 10:52:22.805061] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
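Note: the setup_nvmf_tgt calls traced above configure the fresh target (pid 73261) for TLS: a TCP transport, subsystem cnode1, a listener created with -k so it requires TLS, a malloc0 namespace, and a host entry bound to the PSK file. Condensed into a minimal sketch; rpc.py talks to the target's default /var/tmp/spdk.sock named in the waitforlisten message above, and the $rpc/$key variables are shorthand, not names from the script:

# Hedged sketch of the target/tls.sh@49-58 sequence shown in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.1sgGCHDUzq
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key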
00:13:53.322 [2024-07-25 10:52:22.805351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73316 ] 00:13:53.322 [2024-07-25 10:52:22.943040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.322 [2024-07-25 10:52:23.041719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.580 [2024-07-25 10:52:23.098697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:54.145 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.145 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:54.145 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1sgGCHDUzq 00:13:54.404 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:54.662 [2024-07-25 10:52:24.152315] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:54.662 nvme0n1 00:13:54.662 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:54.662 Running I/O for 1 seconds... 00:13:56.038 00:13:56.038 Latency(us) 00:13:56.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.038 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.038 Verification LBA range: start 0x0 length 0x2000 00:13:56.038 nvme0n1 : 1.03 3800.91 14.85 0.00 0.00 33141.32 7238.75 21567.30 00:13:56.038 =================================================================================================================== 00:13:56.038 Total : 3800.91 14.85 0.00 0.00 33141.32 7238.75 21567.30 00:13:56.038 0 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 73316 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73316 ']' 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73316 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73316 00:13:56.038 killing process with pid 73316 00:13:56.038 Received shutdown signal, test time was about 1.000000 seconds 00:13:56.038 00:13:56.038 Latency(us) 00:13:56.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.038 =================================================================================================================== 00:13:56.038 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:56.038 
10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73316' 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73316 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73316 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 73261 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73261 ']' 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73261 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73261 00:13:56.038 killing process with pid 73261 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73261' 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73261 00:13:56.038 [2024-07-25 10:52:25.688638] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:56.038 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73261 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73367 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73367 00:13:56.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73367 ']' 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
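Note: on the initiator side, the bdevperf run that just finished (pid 73316) no longer passes the PSK path directly; target/tls.sh@227-228 first registers the key file under a keyring name and then references that name when attaching the controller. A minimal sketch of that pair of RPCs against the bdevperf socket ($rpc is shorthand for the rpc.py path used throughout this log):

# Hedged sketch: register the PSK with the keyring, then attach over TCP/TLS using it.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1sgGCHDUzq
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1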
00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.297 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.297 [2024-07-25 10:52:25.973125] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:56.297 [2024-07-25 10:52:25.973416] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.558 [2024-07-25 10:52:26.107069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.558 [2024-07-25 10:52:26.201669] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.558 [2024-07-25 10:52:26.202034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.558 [2024-07-25 10:52:26.202192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.558 [2024-07-25 10:52:26.202363] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.558 [2024-07-25 10:52:26.202409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.558 [2024-07-25 10:52:26.202441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.558 [2024-07-25 10:52:26.256633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:57.498 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.498 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:57.498 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.498 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.498 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.498 [2024-07-25 10:52:27.042206] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.498 malloc0 00:13:57.498 [2024-07-25 10:52:27.074208] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:57.498 [2024-07-25 10:52:27.074664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
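Note: the new target instance for the save_config round (nvmfpid 73367) is brought up the same way as the previous ones: nvmfappstart runs nvmf_tgt inside the nvmf_tgt_ns_spdk network namespace and waitforlisten blocks until /var/tmp/spdk.sock is available. A minimal sketch of that pattern; capturing the pid with $! is an assumption about how the helper records it, not something shown in the trace:

# Hedged sketch of the nvmf/common.sh@480-482 launch traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!                # assumption: pid captured from the background launch
waitforlisten "$nvmfpid"  # waitforlisten: autotest_common.sh helper seen in this log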
00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=73399 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 73399 /var/tmp/bdevperf.sock 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73399 ']' 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.498 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.498 [2024-07-25 10:52:27.162335] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:57.498 [2024-07-25 10:52:27.162634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73399 ] 00:13:57.765 [2024-07-25 10:52:27.302515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.765 [2024-07-25 10:52:27.415446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.765 [2024-07-25 10:52:27.472092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:58.701 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.701 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:58.701 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1sgGCHDUzq 00:13:58.702 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:58.960 [2024-07-25 10:52:28.601766] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:58.960 nvme0n1 00:13:58.960 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:59.219 Running I/O for 1 seconds... 
00:14:00.155 00:14:00.155 Latency(us) 00:14:00.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.155 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:00.155 Verification LBA range: start 0x0 length 0x2000 00:14:00.155 nvme0n1 : 1.02 4041.67 15.79 0.00 0.00 31285.81 1921.40 22163.08 00:14:00.155 =================================================================================================================== 00:14:00.155 Total : 4041.67 15.79 0.00 0.00 31285.81 1921.40 22163.08 00:14:00.155 0 00:14:00.155 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:14:00.155 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.155 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.413 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.413 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:14:00.413 "subsystems": [ 00:14:00.413 { 00:14:00.413 "subsystem": "keyring", 00:14:00.413 "config": [ 00:14:00.413 { 00:14:00.413 "method": "keyring_file_add_key", 00:14:00.413 "params": { 00:14:00.413 "name": "key0", 00:14:00.414 "path": "/tmp/tmp.1sgGCHDUzq" 00:14:00.414 } 00:14:00.414 } 00:14:00.414 ] 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "subsystem": "iobuf", 00:14:00.414 "config": [ 00:14:00.414 { 00:14:00.414 "method": "iobuf_set_options", 00:14:00.414 "params": { 00:14:00.414 "small_pool_count": 8192, 00:14:00.414 "large_pool_count": 1024, 00:14:00.414 "small_bufsize": 8192, 00:14:00.414 "large_bufsize": 135168 00:14:00.414 } 00:14:00.414 } 00:14:00.414 ] 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "subsystem": "sock", 00:14:00.414 "config": [ 00:14:00.414 { 00:14:00.414 "method": "sock_set_default_impl", 00:14:00.414 "params": { 00:14:00.414 "impl_name": "uring" 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "sock_impl_set_options", 00:14:00.414 "params": { 00:14:00.414 "impl_name": "ssl", 00:14:00.414 "recv_buf_size": 4096, 00:14:00.414 "send_buf_size": 4096, 00:14:00.414 "enable_recv_pipe": true, 00:14:00.414 "enable_quickack": false, 00:14:00.414 "enable_placement_id": 0, 00:14:00.414 "enable_zerocopy_send_server": true, 00:14:00.414 "enable_zerocopy_send_client": false, 00:14:00.414 "zerocopy_threshold": 0, 00:14:00.414 "tls_version": 0, 00:14:00.414 "enable_ktls": false 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "sock_impl_set_options", 00:14:00.414 "params": { 00:14:00.414 "impl_name": "posix", 00:14:00.414 "recv_buf_size": 2097152, 00:14:00.414 "send_buf_size": 2097152, 00:14:00.414 "enable_recv_pipe": true, 00:14:00.414 "enable_quickack": false, 00:14:00.414 "enable_placement_id": 0, 00:14:00.414 "enable_zerocopy_send_server": true, 00:14:00.414 "enable_zerocopy_send_client": false, 00:14:00.414 "zerocopy_threshold": 0, 00:14:00.414 "tls_version": 0, 00:14:00.414 "enable_ktls": false 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "sock_impl_set_options", 00:14:00.414 "params": { 00:14:00.414 "impl_name": "uring", 00:14:00.414 "recv_buf_size": 2097152, 00:14:00.414 "send_buf_size": 2097152, 00:14:00.414 "enable_recv_pipe": true, 00:14:00.414 "enable_quickack": false, 00:14:00.414 "enable_placement_id": 0, 00:14:00.414 "enable_zerocopy_send_server": false, 00:14:00.414 "enable_zerocopy_send_client": false, 00:14:00.414 
"zerocopy_threshold": 0, 00:14:00.414 "tls_version": 0, 00:14:00.414 "enable_ktls": false 00:14:00.414 } 00:14:00.414 } 00:14:00.414 ] 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "subsystem": "vmd", 00:14:00.414 "config": [] 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "subsystem": "accel", 00:14:00.414 "config": [ 00:14:00.414 { 00:14:00.414 "method": "accel_set_options", 00:14:00.414 "params": { 00:14:00.414 "small_cache_size": 128, 00:14:00.414 "large_cache_size": 16, 00:14:00.414 "task_count": 2048, 00:14:00.414 "sequence_count": 2048, 00:14:00.414 "buf_count": 2048 00:14:00.414 } 00:14:00.414 } 00:14:00.414 ] 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "subsystem": "bdev", 00:14:00.414 "config": [ 00:14:00.414 { 00:14:00.414 "method": "bdev_set_options", 00:14:00.414 "params": { 00:14:00.414 "bdev_io_pool_size": 65535, 00:14:00.414 "bdev_io_cache_size": 256, 00:14:00.414 "bdev_auto_examine": true, 00:14:00.414 "iobuf_small_cache_size": 128, 00:14:00.414 "iobuf_large_cache_size": 16 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "bdev_raid_set_options", 00:14:00.414 "params": { 00:14:00.414 "process_window_size_kb": 1024, 00:14:00.414 "process_max_bandwidth_mb_sec": 0 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "bdev_iscsi_set_options", 00:14:00.414 "params": { 00:14:00.414 "timeout_sec": 30 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "bdev_nvme_set_options", 00:14:00.414 "params": { 00:14:00.414 "action_on_timeout": "none", 00:14:00.414 "timeout_us": 0, 00:14:00.414 "timeout_admin_us": 0, 00:14:00.414 "keep_alive_timeout_ms": 10000, 00:14:00.414 "arbitration_burst": 0, 00:14:00.414 "low_priority_weight": 0, 00:14:00.414 "medium_priority_weight": 0, 00:14:00.414 "high_priority_weight": 0, 00:14:00.414 "nvme_adminq_poll_period_us": 10000, 00:14:00.414 "nvme_ioq_poll_period_us": 0, 00:14:00.414 "io_queue_requests": 0, 00:14:00.414 "delay_cmd_submit": true, 00:14:00.414 "transport_retry_count": 4, 00:14:00.414 "bdev_retry_count": 3, 00:14:00.414 "transport_ack_timeout": 0, 00:14:00.414 "ctrlr_loss_timeout_sec": 0, 00:14:00.414 "reconnect_delay_sec": 0, 00:14:00.414 "fast_io_fail_timeout_sec": 0, 00:14:00.414 "disable_auto_failback": false, 00:14:00.414 "generate_uuids": false, 00:14:00.414 "transport_tos": 0, 00:14:00.414 "nvme_error_stat": false, 00:14:00.414 "rdma_srq_size": 0, 00:14:00.414 "io_path_stat": false, 00:14:00.414 "allow_accel_sequence": false, 00:14:00.414 "rdma_max_cq_size": 0, 00:14:00.414 "rdma_cm_event_timeout_ms": 0, 00:14:00.414 "dhchap_digests": [ 00:14:00.414 "sha256", 00:14:00.414 "sha384", 00:14:00.414 "sha512" 00:14:00.414 ], 00:14:00.414 "dhchap_dhgroups": [ 00:14:00.414 "null", 00:14:00.414 "ffdhe2048", 00:14:00.414 "ffdhe3072", 00:14:00.414 "ffdhe4096", 00:14:00.414 "ffdhe6144", 00:14:00.414 "ffdhe8192" 00:14:00.414 ] 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "bdev_nvme_set_hotplug", 00:14:00.414 "params": { 00:14:00.414 "period_us": 100000, 00:14:00.414 "enable": false 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "bdev_malloc_create", 00:14:00.414 "params": { 00:14:00.414 "name": "malloc0", 00:14:00.414 "num_blocks": 8192, 00:14:00.414 "block_size": 4096, 00:14:00.414 "physical_block_size": 4096, 00:14:00.414 "uuid": "9a3c36dd-5f67-4aa5-8e27-eb71e5b83447", 00:14:00.414 "optimal_io_boundary": 0, 00:14:00.414 "md_size": 0, 00:14:00.414 "dif_type": 0, 00:14:00.414 "dif_is_head_of_md": false, 00:14:00.414 "dif_pi_format": 0 00:14:00.414 } 
00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "bdev_wait_for_examine" 00:14:00.414 } 00:14:00.414 ] 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "subsystem": "nbd", 00:14:00.414 "config": [] 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "subsystem": "scheduler", 00:14:00.414 "config": [ 00:14:00.414 { 00:14:00.414 "method": "framework_set_scheduler", 00:14:00.414 "params": { 00:14:00.414 "name": "static" 00:14:00.414 } 00:14:00.414 } 00:14:00.414 ] 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "subsystem": "nvmf", 00:14:00.414 "config": [ 00:14:00.414 { 00:14:00.414 "method": "nvmf_set_config", 00:14:00.414 "params": { 00:14:00.414 "discovery_filter": "match_any", 00:14:00.414 "admin_cmd_passthru": { 00:14:00.414 "identify_ctrlr": false 00:14:00.414 } 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "nvmf_set_max_subsystems", 00:14:00.414 "params": { 00:14:00.414 "max_subsystems": 1024 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "nvmf_set_crdt", 00:14:00.414 "params": { 00:14:00.414 "crdt1": 0, 00:14:00.414 "crdt2": 0, 00:14:00.414 "crdt3": 0 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "nvmf_create_transport", 00:14:00.414 "params": { 00:14:00.414 "trtype": "TCP", 00:14:00.414 "max_queue_depth": 128, 00:14:00.414 "max_io_qpairs_per_ctrlr": 127, 00:14:00.414 "in_capsule_data_size": 4096, 00:14:00.414 "max_io_size": 131072, 00:14:00.414 "io_unit_size": 131072, 00:14:00.414 "max_aq_depth": 128, 00:14:00.414 "num_shared_buffers": 511, 00:14:00.414 "buf_cache_size": 4294967295, 00:14:00.414 "dif_insert_or_strip": false, 00:14:00.414 "zcopy": false, 00:14:00.414 "c2h_success": false, 00:14:00.414 "sock_priority": 0, 00:14:00.414 "abort_timeout_sec": 1, 00:14:00.414 "ack_timeout": 0, 00:14:00.414 "data_wr_pool_size": 0 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "nvmf_create_subsystem", 00:14:00.414 "params": { 00:14:00.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.414 "allow_any_host": false, 00:14:00.414 "serial_number": "00000000000000000000", 00:14:00.414 "model_number": "SPDK bdev Controller", 00:14:00.414 "max_namespaces": 32, 00:14:00.414 "min_cntlid": 1, 00:14:00.414 "max_cntlid": 65519, 00:14:00.414 "ana_reporting": false 00:14:00.414 } 00:14:00.414 }, 00:14:00.414 { 00:14:00.414 "method": "nvmf_subsystem_add_host", 00:14:00.414 "params": { 00:14:00.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.415 "host": "nqn.2016-06.io.spdk:host1", 00:14:00.415 "psk": "key0" 00:14:00.415 } 00:14:00.415 }, 00:14:00.415 { 00:14:00.415 "method": "nvmf_subsystem_add_ns", 00:14:00.415 "params": { 00:14:00.415 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.415 "namespace": { 00:14:00.415 "nsid": 1, 00:14:00.415 "bdev_name": "malloc0", 00:14:00.415 "nguid": "9A3C36DD5F674AA58E27EB71E5B83447", 00:14:00.415 "uuid": "9a3c36dd-5f67-4aa5-8e27-eb71e5b83447", 00:14:00.415 "no_auto_visible": false 00:14:00.415 } 00:14:00.415 } 00:14:00.415 }, 00:14:00.415 { 00:14:00.415 "method": "nvmf_subsystem_add_listener", 00:14:00.415 "params": { 00:14:00.415 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.415 "listen_address": { 00:14:00.415 "trtype": "TCP", 00:14:00.415 "adrfam": "IPv4", 00:14:00.415 "traddr": "10.0.0.2", 00:14:00.415 "trsvcid": "4420" 00:14:00.415 }, 00:14:00.415 "secure_channel": false, 00:14:00.415 "sock_impl": "ssl" 00:14:00.415 } 00:14:00.415 } 00:14:00.415 ] 00:14:00.415 } 00:14:00.415 ] 00:14:00.415 }' 00:14:00.415 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:00.673 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:14:00.673 "subsystems": [ 00:14:00.673 { 00:14:00.673 "subsystem": "keyring", 00:14:00.673 "config": [ 00:14:00.673 { 00:14:00.673 "method": "keyring_file_add_key", 00:14:00.673 "params": { 00:14:00.673 "name": "key0", 00:14:00.673 "path": "/tmp/tmp.1sgGCHDUzq" 00:14:00.673 } 00:14:00.673 } 00:14:00.673 ] 00:14:00.673 }, 00:14:00.673 { 00:14:00.673 "subsystem": "iobuf", 00:14:00.673 "config": [ 00:14:00.673 { 00:14:00.673 "method": "iobuf_set_options", 00:14:00.673 "params": { 00:14:00.673 "small_pool_count": 8192, 00:14:00.673 "large_pool_count": 1024, 00:14:00.673 "small_bufsize": 8192, 00:14:00.673 "large_bufsize": 135168 00:14:00.673 } 00:14:00.673 } 00:14:00.673 ] 00:14:00.673 }, 00:14:00.673 { 00:14:00.673 "subsystem": "sock", 00:14:00.673 "config": [ 00:14:00.673 { 00:14:00.673 "method": "sock_set_default_impl", 00:14:00.673 "params": { 00:14:00.673 "impl_name": "uring" 00:14:00.673 } 00:14:00.673 }, 00:14:00.673 { 00:14:00.673 "method": "sock_impl_set_options", 00:14:00.674 "params": { 00:14:00.674 "impl_name": "ssl", 00:14:00.674 "recv_buf_size": 4096, 00:14:00.674 "send_buf_size": 4096, 00:14:00.674 "enable_recv_pipe": true, 00:14:00.674 "enable_quickack": false, 00:14:00.674 "enable_placement_id": 0, 00:14:00.674 "enable_zerocopy_send_server": true, 00:14:00.674 "enable_zerocopy_send_client": false, 00:14:00.674 "zerocopy_threshold": 0, 00:14:00.674 "tls_version": 0, 00:14:00.674 "enable_ktls": false 00:14:00.674 } 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "method": "sock_impl_set_options", 00:14:00.674 "params": { 00:14:00.674 "impl_name": "posix", 00:14:00.674 "recv_buf_size": 2097152, 00:14:00.674 "send_buf_size": 2097152, 00:14:00.674 "enable_recv_pipe": true, 00:14:00.674 "enable_quickack": false, 00:14:00.674 "enable_placement_id": 0, 00:14:00.674 "enable_zerocopy_send_server": true, 00:14:00.674 "enable_zerocopy_send_client": false, 00:14:00.674 "zerocopy_threshold": 0, 00:14:00.674 "tls_version": 0, 00:14:00.674 "enable_ktls": false 00:14:00.674 } 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "method": "sock_impl_set_options", 00:14:00.674 "params": { 00:14:00.674 "impl_name": "uring", 00:14:00.674 "recv_buf_size": 2097152, 00:14:00.674 "send_buf_size": 2097152, 00:14:00.674 "enable_recv_pipe": true, 00:14:00.674 "enable_quickack": false, 00:14:00.674 "enable_placement_id": 0, 00:14:00.674 "enable_zerocopy_send_server": false, 00:14:00.674 "enable_zerocopy_send_client": false, 00:14:00.674 "zerocopy_threshold": 0, 00:14:00.674 "tls_version": 0, 00:14:00.674 "enable_ktls": false 00:14:00.674 } 00:14:00.674 } 00:14:00.674 ] 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "subsystem": "vmd", 00:14:00.674 "config": [] 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "subsystem": "accel", 00:14:00.674 "config": [ 00:14:00.674 { 00:14:00.674 "method": "accel_set_options", 00:14:00.674 "params": { 00:14:00.674 "small_cache_size": 128, 00:14:00.674 "large_cache_size": 16, 00:14:00.674 "task_count": 2048, 00:14:00.674 "sequence_count": 2048, 00:14:00.674 "buf_count": 2048 00:14:00.674 } 00:14:00.674 } 00:14:00.674 ] 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "subsystem": "bdev", 00:14:00.674 "config": [ 00:14:00.674 { 00:14:00.674 "method": "bdev_set_options", 00:14:00.674 "params": { 00:14:00.674 "bdev_io_pool_size": 65535, 00:14:00.674 "bdev_io_cache_size": 256, 00:14:00.674 "bdev_auto_examine": true, 
00:14:00.674 "iobuf_small_cache_size": 128, 00:14:00.674 "iobuf_large_cache_size": 16 00:14:00.674 } 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "method": "bdev_raid_set_options", 00:14:00.674 "params": { 00:14:00.674 "process_window_size_kb": 1024, 00:14:00.674 "process_max_bandwidth_mb_sec": 0 00:14:00.674 } 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "method": "bdev_iscsi_set_options", 00:14:00.674 "params": { 00:14:00.674 "timeout_sec": 30 00:14:00.674 } 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "method": "bdev_nvme_set_options", 00:14:00.674 "params": { 00:14:00.674 "action_on_timeout": "none", 00:14:00.674 "timeout_us": 0, 00:14:00.674 "timeout_admin_us": 0, 00:14:00.674 "keep_alive_timeout_ms": 10000, 00:14:00.674 "arbitration_burst": 0, 00:14:00.674 "low_priority_weight": 0, 00:14:00.674 "medium_priority_weight": 0, 00:14:00.674 "high_priority_weight": 0, 00:14:00.674 "nvme_adminq_poll_period_us": 10000, 00:14:00.674 "nvme_ioq_poll_period_us": 0, 00:14:00.674 "io_queue_requests": 512, 00:14:00.674 "delay_cmd_submit": true, 00:14:00.674 "transport_retry_count": 4, 00:14:00.674 "bdev_retry_count": 3, 00:14:00.674 "transport_ack_timeout": 0, 00:14:00.674 "ctrlr_loss_timeout_sec": 0, 00:14:00.674 "reconnect_delay_sec": 0, 00:14:00.674 "fast_io_fail_timeout_sec": 0, 00:14:00.674 "disable_auto_failback": false, 00:14:00.674 "generate_uuids": false, 00:14:00.674 "transport_tos": 0, 00:14:00.674 "nvme_error_stat": false, 00:14:00.674 "rdma_srq_size": 0, 00:14:00.674 "io_path_stat": false, 00:14:00.674 "allow_accel_sequence": false, 00:14:00.674 "rdma_max_cq_size": 0, 00:14:00.674 "rdma_cm_event_timeout_ms": 0, 00:14:00.674 "dhchap_digests": [ 00:14:00.674 "sha256", 00:14:00.674 "sha384", 00:14:00.674 "sha512" 00:14:00.674 ], 00:14:00.674 "dhchap_dhgroups": [ 00:14:00.674 "null", 00:14:00.674 "ffdhe2048", 00:14:00.674 "ffdhe3072", 00:14:00.674 "ffdhe4096", 00:14:00.674 "ffdhe6144", 00:14:00.674 "ffdhe8192" 00:14:00.674 ] 00:14:00.674 } 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "method": "bdev_nvme_attach_controller", 00:14:00.674 "params": { 00:14:00.674 "name": "nvme0", 00:14:00.674 "trtype": "TCP", 00:14:00.674 "adrfam": "IPv4", 00:14:00.674 "traddr": "10.0.0.2", 00:14:00.674 "trsvcid": "4420", 00:14:00.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.674 "prchk_reftag": false, 00:14:00.674 "prchk_guard": false, 00:14:00.674 "ctrlr_loss_timeout_sec": 0, 00:14:00.674 "reconnect_delay_sec": 0, 00:14:00.674 "fast_io_fail_timeout_sec": 0, 00:14:00.674 "psk": "key0", 00:14:00.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:00.674 "hdgst": false, 00:14:00.674 "ddgst": false 00:14:00.674 } 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "method": "bdev_nvme_set_hotplug", 00:14:00.674 "params": { 00:14:00.674 "period_us": 100000, 00:14:00.674 "enable": false 00:14:00.674 } 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "method": "bdev_enable_histogram", 00:14:00.674 "params": { 00:14:00.674 "name": "nvme0n1", 00:14:00.674 "enable": true 00:14:00.674 } 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "method": "bdev_wait_for_examine" 00:14:00.674 } 00:14:00.674 ] 00:14:00.674 }, 00:14:00.674 { 00:14:00.674 "subsystem": "nbd", 00:14:00.674 "config": [] 00:14:00.674 } 00:14:00.674 ] 00:14:00.674 }' 00:14:00.674 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 73399 00:14:00.674 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73399 ']' 00:14:00.674 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- 
# kill -0 73399 00:14:00.674 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:00.674 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:00.674 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73399 00:14:00.674 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:00.674 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:00.675 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73399' 00:14:00.675 killing process with pid 73399 00:14:00.675 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73399 00:14:00.675 Received shutdown signal, test time was about 1.000000 seconds 00:14:00.675 00:14:00.675 Latency(us) 00:14:00.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.675 =================================================================================================================== 00:14:00.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:00.675 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73399 00:14:00.933 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 73367 00:14:00.933 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73367 ']' 00:14:00.934 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73367 00:14:00.934 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:00.934 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:00.934 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73367 00:14:00.934 killing process with pid 73367 00:14:00.934 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:00.934 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:00.934 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73367' 00:14:00.934 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73367 00:14:00.934 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73367 00:14:01.193 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:14:01.193 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.193 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:14:01.193 "subsystems": [ 00:14:01.193 { 00:14:01.193 "subsystem": "keyring", 00:14:01.193 "config": [ 00:14:01.193 { 00:14:01.193 "method": "keyring_file_add_key", 00:14:01.193 "params": { 00:14:01.193 "name": "key0", 00:14:01.193 "path": "/tmp/tmp.1sgGCHDUzq" 00:14:01.193 } 00:14:01.193 } 00:14:01.193 ] 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "subsystem": "iobuf", 00:14:01.193 "config": [ 00:14:01.193 { 00:14:01.193 "method": "iobuf_set_options", 00:14:01.193 "params": { 00:14:01.193 "small_pool_count": 8192, 00:14:01.193 "large_pool_count": 
1024, 00:14:01.193 "small_bufsize": 8192, 00:14:01.193 "large_bufsize": 135168 00:14:01.193 } 00:14:01.193 } 00:14:01.193 ] 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "subsystem": "sock", 00:14:01.193 "config": [ 00:14:01.193 { 00:14:01.193 "method": "sock_set_default_impl", 00:14:01.193 "params": { 00:14:01.193 "impl_name": "uring" 00:14:01.193 } 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "method": "sock_impl_set_options", 00:14:01.193 "params": { 00:14:01.193 "impl_name": "ssl", 00:14:01.193 "recv_buf_size": 4096, 00:14:01.193 "send_buf_size": 4096, 00:14:01.193 "enable_recv_pipe": true, 00:14:01.193 "enable_quickack": false, 00:14:01.193 "enable_placement_id": 0, 00:14:01.193 "enable_zerocopy_send_server": true, 00:14:01.193 "enable_zerocopy_send_client": false, 00:14:01.193 "zerocopy_threshold": 0, 00:14:01.193 "tls_version": 0, 00:14:01.193 "enable_ktls": false 00:14:01.193 } 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "method": "sock_impl_set_options", 00:14:01.193 "params": { 00:14:01.193 "impl_name": "posix", 00:14:01.193 "recv_buf_size": 2097152, 00:14:01.193 "send_buf_size": 2097152, 00:14:01.193 "enable_recv_pipe": true, 00:14:01.193 "enable_quickack": false, 00:14:01.193 "enable_placement_id": 0, 00:14:01.193 "enable_zerocopy_send_server": true, 00:14:01.193 "enable_zerocopy_send_client": false, 00:14:01.193 "zerocopy_threshold": 0, 00:14:01.193 "tls_version": 0, 00:14:01.193 "enable_ktls": false 00:14:01.193 } 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "method": "sock_impl_set_options", 00:14:01.193 "params": { 00:14:01.193 "impl_name": "uring", 00:14:01.193 "recv_buf_size": 2097152, 00:14:01.193 "send_buf_size": 2097152, 00:14:01.193 "enable_recv_pipe": true, 00:14:01.193 "enable_quickack": false, 00:14:01.193 "enable_placement_id": 0, 00:14:01.193 "enable_zerocopy_send_server": false, 00:14:01.193 "enable_zerocopy_send_client": false, 00:14:01.193 "zerocopy_threshold": 0, 00:14:01.193 "tls_version": 0, 00:14:01.193 "enable_ktls": false 00:14:01.193 } 00:14:01.193 } 00:14:01.193 ] 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "subsystem": "vmd", 00:14:01.193 "config": [] 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "subsystem": "accel", 00:14:01.193 "config": [ 00:14:01.193 { 00:14:01.193 "method": "accel_set_options", 00:14:01.193 "params": { 00:14:01.193 "small_cache_size": 128, 00:14:01.193 "large_cache_size": 16, 00:14:01.193 "task_count": 2048, 00:14:01.193 "sequence_count": 2048, 00:14:01.193 "buf_count": 2048 00:14:01.193 } 00:14:01.193 } 00:14:01.193 ] 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "subsystem": "bdev", 00:14:01.193 "config": [ 00:14:01.193 { 00:14:01.193 "method": "bdev_set_options", 00:14:01.193 "params": { 00:14:01.193 "bdev_io_pool_size": 65535, 00:14:01.193 "bdev_io_cache_size": 256, 00:14:01.193 "bdev_auto_examine": true, 00:14:01.193 "iobuf_small_cache_size": 128, 00:14:01.193 "iobuf_large_cache_size": 16 00:14:01.193 } 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "method": "bdev_raid_set_options", 00:14:01.193 "params": { 00:14:01.193 "process_window_size_kb": 1024, 00:14:01.193 "process_max_bandwidth_mb_sec": 0 00:14:01.193 } 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "method": "bdev_iscsi_set_options", 00:14:01.193 "params": { 00:14:01.193 "timeout_sec": 30 00:14:01.193 } 00:14:01.193 }, 00:14:01.193 { 00:14:01.193 "method": "bdev_nvme_set_options", 00:14:01.193 "params": { 00:14:01.193 "action_on_timeout": "none", 00:14:01.193 "timeout_us": 0, 00:14:01.193 "timeout_admin_us": 0, 00:14:01.193 "keep_alive_timeout_ms": 10000, 00:14:01.193 
"arbitration_burst": 0, 00:14:01.193 "low_priority_weight": 0, 00:14:01.193 "medium_priority_weight": 0, 00:14:01.193 "high_priority_weight": 0, 00:14:01.193 "nvme_adminq_poll_period_us": 10000, 00:14:01.193 "nvme_ioq_poll_period_us": 0, 00:14:01.193 "io_queue_requests": 0, 00:14:01.193 "delay_cmd_submit": true, 00:14:01.193 "transport_retry_count": 4, 00:14:01.193 "bdev_retry_count": 3, 00:14:01.193 "transport_ack_timeout": 0, 00:14:01.193 "ctrlr_loss_timeout_sec": 0, 00:14:01.193 "reconnect_delay_sec": 0, 00:14:01.193 "fast_io_fail_timeout_sec": 0, 00:14:01.193 "disable_auto_failback": false, 00:14:01.193 "generate_uuids": false, 00:14:01.193 "transport_tos": 0, 00:14:01.193 "nvme_error_stat": false, 00:14:01.193 "rdma_srq_size": 0, 00:14:01.193 "io_path_stat": false, 00:14:01.193 "allow_accel_sequence": false, 00:14:01.194 "rdma_max_cq_size": 0, 00:14:01.194 "rdma_cm_event_timeout_ms": 0, 00:14:01.194 "dhchap_digests": [ 00:14:01.194 "sha256", 00:14:01.194 "sha384", 00:14:01.194 "sha512" 00:14:01.194 ], 00:14:01.194 "dhchap_dhgroups": [ 00:14:01.194 "null", 00:14:01.194 "ffdhe2048", 00:14:01.194 "ffdhe3072", 00:14:01.194 "ffdhe4096", 00:14:01.194 "ffdhe6144", 00:14:01.194 "ffdhe8192" 00:14:01.194 ] 00:14:01.194 } 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "method": "bdev_nvme_set_hotplug", 00:14:01.194 "params": { 00:14:01.194 "period_us": 100000, 00:14:01.194 "enable": false 00:14:01.194 } 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "method": "bdev_malloc_create", 00:14:01.194 "params": { 00:14:01.194 "name": "malloc0", 00:14:01.194 "num_blocks": 8192, 00:14:01.194 "block_size": 4096, 00:14:01.194 "physical_block_size": 4096, 00:14:01.194 "uuid": "9a3c36dd-5f67-4aa5-8e27-eb71e5b83447", 00:14:01.194 "optimal_io_boundary": 0, 00:14:01.194 "md_size": 0, 00:14:01.194 "dif_type": 0, 00:14:01.194 "dif_is_head_of_md": false, 00:14:01.194 "dif_pi_format": 0 00:14:01.194 } 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "method": "bdev_wait_for_examine" 00:14:01.194 } 00:14:01.194 ] 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "subsystem": "nbd", 00:14:01.194 "config": [] 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "subsystem": "scheduler", 00:14:01.194 "config": [ 00:14:01.194 { 00:14:01.194 "method": "framework_set_scheduler", 00:14:01.194 "params": { 00:14:01.194 "name": "static" 00:14:01.194 } 00:14:01.194 } 00:14:01.194 ] 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "subsystem": "nvmf", 00:14:01.194 "config": [ 00:14:01.194 { 00:14:01.194 "method": "nvmf_set_config", 00:14:01.194 "params": { 00:14:01.194 "discovery_filter": "match_any", 00:14:01.194 "admin_cmd_passthru": { 00:14:01.194 "identify_ctrlr": false 00:14:01.194 } 00:14:01.194 } 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "method": "nvmf_set_max_subsystems", 00:14:01.194 "params": { 00:14:01.194 "max_subsystems": 1024 00:14:01.194 } 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "method": "nvmf_set_crdt", 00:14:01.194 "params": { 00:14:01.194 "crdt1": 0, 00:14:01.194 "crdt2": 0, 00:14:01.194 "crdt3": 0 00:14:01.194 } 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "method": "nvmf_create_transport", 00:14:01.194 "params": { 00:14:01.194 "trtype": "TCP", 00:14:01.194 "max_queue_depth": 128, 00:14:01.194 "max_io_qpairs_per_ctrlr": 127, 00:14:01.194 "in_capsule_data_size": 4096, 00:14:01.194 "max_io_size": 131072, 00:14:01.194 "io_unit_size": 131072, 00:14:01.194 "max_aq_depth": 128, 00:14:01.194 "num_shared_buffers": 511, 00:14:01.194 "buf_cache_size": 4294967295, 00:14:01.194 "dif_insert_or_strip": false, 00:14:01.194 "zcopy": false, 
00:14:01.194 "c2h_success": false, 00:14:01.194 "sock_priority": 0, 00:14:01.194 "abort_timeout_sec": 1, 00:14:01.194 "ack_timeout": 0, 00:14:01.194 "data_wr_pool_size": 0 00:14:01.194 } 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "method": "nvmf_create_subsystem", 00:14:01.194 "params": { 00:14:01.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.194 "allow_any_host": false, 00:14:01.194 "serial_number": "00000000000000000000", 00:14:01.194 "model_number": "SPDK bdev Controller", 00:14:01.194 "max_namespaces": 32, 00:14:01.194 "min_cntlid": 1, 00:14:01.194 "max_cntlid": 65519, 00:14:01.194 "ana_reporting": false 00:14:01.194 } 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "method": "nvmf_subsystem_add_host", 00:14:01.194 "params": { 00:14:01.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.194 "host": "nqn.2016-06.io.spdk:host1", 00:14:01.194 "psk": "key0" 00:14:01.194 } 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "method": "nvmf_subsystem_add_ns", 00:14:01.194 "params": { 00:14:01.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.194 "namespace": { 00:14:01.194 "nsid": 1, 00:14:01.194 "bdev_name": "malloc0", 00:14:01.194 "nguid": "9A3C36DD5F674AA58E27EB71E5B83447", 00:14:01.194 "uuid": "9a3c36dd-5f67-4aa5-8e27-eb71e5b83447", 00:14:01.194 "no_auto_visible": false 00:14:01.194 } 00:14:01.194 } 00:14:01.194 }, 00:14:01.194 { 00:14:01.194 "method": "nvmf_subsystem_add_listener", 00:14:01.194 "params": { 00:14:01.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.194 "listen_address": { 00:14:01.194 "trtype": "TCP", 00:14:01.194 "adrfam": "IPv4", 00:14:01.194 "traddr": "10.0.0.2", 00:14:01.194 "trsvcid": "4420" 00:14:01.194 }, 00:14:01.194 "secure_channel": false, 00:14:01.194 "sock_impl": "ssl" 00:14:01.194 } 00:14:01.194 } 00:14:01.194 ] 00:14:01.194 } 00:14:01.194 ] 00:14:01.194 }' 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73460 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73460 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73460 ']' 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:01.194 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.194 [2024-07-25 10:52:30.896320] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:01.194 [2024-07-25 10:52:30.896426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.452 [2024-07-25 10:52:31.031645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.452 [2024-07-25 10:52:31.135659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.452 [2024-07-25 10:52:31.135719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.452 [2024-07-25 10:52:31.135748] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.452 [2024-07-25 10:52:31.135757] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.452 [2024-07-25 10:52:31.135765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.452 [2024-07-25 10:52:31.135853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.710 [2024-07-25 10:52:31.307554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:01.710 [2024-07-25 10:52:31.387123] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.710 [2024-07-25 10:52:31.419051] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:01.710 [2024-07-25 10:52:31.429083] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=73492 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 73492 /var/tmp/bdevperf.sock 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73492 ']' 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
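The startup config piped into nvmf_tgt on /dev/fd/62 above reduces to a short list of RPC methods. Below is a trimmed sketch of that config, not what the test literally ran: the subsystem NQN, host NQN, PSK name "key0", the 10.0.0.2:4420 listen address and the "sock_impl": "ssl" flag are taken from the echoed JSON, while the keyring path /tmp/psk.txt is invented for illustration, the bdev, scheduler and tuning sections are dropped, and it assumes the full config registers key0 through a keyring_file_add_key entry the way the bdevperf config further down does.

# Sketch only: a cut-down TLS target config in the same shape as the one echoed above.
# (bdev_malloc_create for malloc0 is among the omitted methods, so nsid 1 would not
# bind without restoring it.)
gen_tgt_config() {
cat <<'JSON'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/psk.txt" } }
    ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
      { "method": "nvmf_create_subsystem",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
      { "method": "nvmf_subsystem_add_ns",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "namespace": { "nsid": 1, "bdev_name": "malloc0" } } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2", "trsvcid": "4420" },
                    "secure_channel": false, "sock_impl": "ssl" } }
    ] }
  ]
}
JSON
}
# Same launch pattern as the log; process substitution stands in for /dev/fd/62.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(gen_tgt_config)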
00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.277 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:14:02.277 "subsystems": [ 00:14:02.277 { 00:14:02.277 "subsystem": "keyring", 00:14:02.277 "config": [ 00:14:02.277 { 00:14:02.277 "method": "keyring_file_add_key", 00:14:02.277 "params": { 00:14:02.277 "name": "key0", 00:14:02.277 "path": "/tmp/tmp.1sgGCHDUzq" 00:14:02.277 } 00:14:02.277 } 00:14:02.277 ] 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "subsystem": "iobuf", 00:14:02.277 "config": [ 00:14:02.277 { 00:14:02.277 "method": "iobuf_set_options", 00:14:02.277 "params": { 00:14:02.277 "small_pool_count": 8192, 00:14:02.277 "large_pool_count": 1024, 00:14:02.277 "small_bufsize": 8192, 00:14:02.277 "large_bufsize": 135168 00:14:02.277 } 00:14:02.277 } 00:14:02.277 ] 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "subsystem": "sock", 00:14:02.277 "config": [ 00:14:02.277 { 00:14:02.277 "method": "sock_set_default_impl", 00:14:02.277 "params": { 00:14:02.277 "impl_name": "uring" 00:14:02.277 } 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "method": "sock_impl_set_options", 00:14:02.277 "params": { 00:14:02.277 "impl_name": "ssl", 00:14:02.277 "recv_buf_size": 4096, 00:14:02.277 "send_buf_size": 4096, 00:14:02.277 "enable_recv_pipe": true, 00:14:02.277 "enable_quickack": false, 00:14:02.277 "enable_placement_id": 0, 00:14:02.277 "enable_zerocopy_send_server": true, 00:14:02.277 "enable_zerocopy_send_client": false, 00:14:02.277 "zerocopy_threshold": 0, 00:14:02.277 "tls_version": 0, 00:14:02.277 "enable_ktls": false 00:14:02.277 } 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "method": "sock_impl_set_options", 00:14:02.277 "params": { 00:14:02.277 "impl_name": "posix", 00:14:02.277 "recv_buf_size": 2097152, 00:14:02.277 "send_buf_size": 2097152, 00:14:02.277 "enable_recv_pipe": true, 00:14:02.277 "enable_quickack": false, 00:14:02.277 "enable_placement_id": 0, 00:14:02.277 "enable_zerocopy_send_server": true, 00:14:02.277 "enable_zerocopy_send_client": false, 00:14:02.277 "zerocopy_threshold": 0, 00:14:02.277 "tls_version": 0, 00:14:02.277 "enable_ktls": false 00:14:02.277 } 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "method": "sock_impl_set_options", 00:14:02.277 "params": { 00:14:02.277 "impl_name": "uring", 00:14:02.277 "recv_buf_size": 2097152, 00:14:02.277 "send_buf_size": 2097152, 00:14:02.277 "enable_recv_pipe": true, 00:14:02.277 "enable_quickack": false, 00:14:02.277 "enable_placement_id": 0, 00:14:02.277 "enable_zerocopy_send_server": false, 00:14:02.277 "enable_zerocopy_send_client": false, 00:14:02.277 "zerocopy_threshold": 0, 00:14:02.277 "tls_version": 0, 00:14:02.277 "enable_ktls": false 00:14:02.277 } 00:14:02.277 } 00:14:02.277 ] 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "subsystem": "vmd", 00:14:02.277 "config": [] 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "subsystem": "accel", 00:14:02.277 "config": [ 00:14:02.277 { 00:14:02.277 "method": "accel_set_options", 00:14:02.277 "params": { 00:14:02.277 "small_cache_size": 128, 00:14:02.277 "large_cache_size": 16, 00:14:02.277 "task_count": 2048, 00:14:02.277 "sequence_count": 2048, 00:14:02.277 "buf_count": 2048 
00:14:02.277 } 00:14:02.277 } 00:14:02.277 ] 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "subsystem": "bdev", 00:14:02.277 "config": [ 00:14:02.277 { 00:14:02.277 "method": "bdev_set_options", 00:14:02.277 "params": { 00:14:02.277 "bdev_io_pool_size": 65535, 00:14:02.277 "bdev_io_cache_size": 256, 00:14:02.277 "bdev_auto_examine": true, 00:14:02.277 "iobuf_small_cache_size": 128, 00:14:02.277 "iobuf_large_cache_size": 16 00:14:02.277 } 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "method": "bdev_raid_set_options", 00:14:02.277 "params": { 00:14:02.277 "process_window_size_kb": 1024, 00:14:02.277 "process_max_bandwidth_mb_sec": 0 00:14:02.277 } 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "method": "bdev_iscsi_set_options", 00:14:02.277 "params": { 00:14:02.277 "timeout_sec": 30 00:14:02.277 } 00:14:02.277 }, 00:14:02.277 { 00:14:02.277 "method": "bdev_nvme_set_options", 00:14:02.277 "params": { 00:14:02.277 "action_on_timeout": "none", 00:14:02.277 "timeout_us": 0, 00:14:02.277 "timeout_admin_us": 0, 00:14:02.277 "keep_alive_timeout_ms": 10000, 00:14:02.277 "arbitration_burst": 0, 00:14:02.277 "low_priority_weight": 0, 00:14:02.277 "medium_priority_weight": 0, 00:14:02.277 "high_priority_weight": 0, 00:14:02.278 "nvme_adminq_poll_period_us": 10000, 00:14:02.278 "nvme_ioq_poll_period_us": 0, 00:14:02.278 "io_queue_requests": 512, 00:14:02.278 "delay_cmd_submit": true, 00:14:02.278 "transport_retry_count": 4, 00:14:02.278 "bdev_retry_count": 3, 00:14:02.278 "transport_ack_timeout": 0, 00:14:02.278 "ctrlr_loss_timeout_sec": 0, 00:14:02.278 "reconnect_delay_sec": 0, 00:14:02.278 "fast_io_fail_timeout_sec": 0, 00:14:02.278 "disable_auto_failback": false, 00:14:02.278 "generate_uuids": false, 00:14:02.278 "transport_tos": 0, 00:14:02.278 "nvme_error_stat": false, 00:14:02.278 "rdma_srq_size": 0, 00:14:02.278 "io_path_stat": false, 00:14:02.278 "allow_accel_sequence": false, 00:14:02.278 "rdma_max_cq_size": 0, 00:14:02.278 "rdma_cm_event_timeout_ms": 0, 00:14:02.278 "dhchap_digests": [ 00:14:02.278 "sha256", 00:14:02.278 "sha384", 00:14:02.278 "sha512" 00:14:02.278 ], 00:14:02.278 "dhchap_dhgroups": [ 00:14:02.278 "null", 00:14:02.278 "ffdhe2048", 00:14:02.278 "ffdhe3072", 00:14:02.278 "ffdhe4096", 00:14:02.278 "ffdhe6144", 00:14:02.278 "ffdhe8192" 00:14:02.278 ] 00:14:02.278 } 00:14:02.278 }, 00:14:02.278 { 00:14:02.278 "method": "bdev_nvme_attach_controller", 00:14:02.278 "params": { 00:14:02.278 "name": "nvme0", 00:14:02.278 "trtype": "TCP", 00:14:02.278 "adrfam": "IPv4", 00:14:02.278 "traddr": "10.0.0.2", 00:14:02.278 "trsvcid": "4420", 00:14:02.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.278 "prchk_reftag": false, 00:14:02.278 "prchk_guard": false, 00:14:02.278 "ctrlr_loss_timeout_sec": 0, 00:14:02.278 "reconnect_delay_sec": 0, 00:14:02.278 "fast_io_fail_timeout_sec": 0, 00:14:02.278 "psk": "key0", 00:14:02.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.278 "hdgst": false, 00:14:02.278 "ddgst": false 00:14:02.278 } 00:14:02.278 }, 00:14:02.278 { 00:14:02.278 "method": "bdev_nvme_set_hotplug", 00:14:02.278 "params": { 00:14:02.278 "period_us": 100000, 00:14:02.278 "enable": false 00:14:02.278 } 00:14:02.278 }, 00:14:02.278 { 00:14:02.278 "method": "bdev_enable_histogram", 00:14:02.278 "params": { 00:14:02.278 "name": "nvme0n1", 00:14:02.278 "enable": true 00:14:02.278 } 00:14:02.278 }, 00:14:02.278 { 00:14:02.278 "method": "bdev_wait_for_examine" 00:14:02.278 } 00:14:02.278 ] 00:14:02.278 }, 00:14:02.278 { 00:14:02.278 "subsystem": "nbd", 00:14:02.278 "config": [] 00:14:02.278 } 
00:14:02.278 ] 00:14:02.278 }' 00:14:02.278 [2024-07-25 10:52:31.959265] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:02.278 [2024-07-25 10:52:31.959585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73492 ] 00:14:02.536 [2024-07-25 10:52:32.097834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.536 [2024-07-25 10:52:32.212172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.795 [2024-07-25 10:52:32.350446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:02.795 [2024-07-25 10:52:32.396971] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:03.362 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.362 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:03.362 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:03.362 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:14:03.621 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.621 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:03.621 Running I/O for 1 seconds... 
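Once bdevperf is up with its own keyring entry for key0 and a bdev_nvme_attach_controller call that references "psk": "key0", the script only needs to confirm that the controller really attached before starting I/O. A rough sketch of that driver sequence, with error handling reduced to one check; the rpc.py, jq and bdevperf.py invocations are the ones visible in the trace above:

# Sketch: drive the already-running bdevperf instance the same way tls.sh does.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
PERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

# The TLS handshake happens while the controller from the -c config attaches;
# if it had failed, no controller would be listed here.
name=$("$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] || { echo "controller did not attach over TLS" >&2; exit 1; }

# Run the configured workload (-q 128 -o 4k -w verify -t 1 on the command line above).
"$PERF" -s /var/tmp/bdevperf.sock perform_tests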
00:14:04.563 00:14:04.563 Latency(us) 00:14:04.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.563 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:04.563 Verification LBA range: start 0x0 length 0x2000 00:14:04.563 nvme0n1 : 1.01 3968.62 15.50 0.00 0.00 32027.23 3321.48 35746.91 00:14:04.564 =================================================================================================================== 00:14:04.564 Total : 3968.62 15.50 0.00 0.00 32027.23 3321.48 35746.91 00:14:04.564 0 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:04.564 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:04.823 nvmf_trace.0 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73492 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73492 ']' 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73492 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73492 00:14:04.823 killing process with pid 73492 00:14:04.823 Received shutdown signal, test time was about 1.000000 seconds 00:14:04.823 00:14:04.823 Latency(us) 00:14:04.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.823 =================================================================================================================== 00:14:04.823 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73492' 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 73492 00:14:04.823 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73492 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.082 rmmod nvme_tcp 00:14:05.082 rmmod nvme_fabrics 00:14:05.082 rmmod nvme_keyring 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73460 ']' 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73460 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73460 ']' 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73460 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73460 00:14:05.082 killing process with pid 73460 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73460' 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73460 00:14:05.082 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73460 00:14:05.341 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.341 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.341 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.341 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.341 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.341 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.341 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.341 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
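The teardown above is the usual nvmftestfini sequence: flush, unload the NVMe/TCP kernel modules, stop the target recorded in nvmfpid, and drop the test network namespace. A condensed sketch follows; the real helpers live in nvmf/common.sh, the '|| :' guards stand in for the script's set +e retry loop, and the netns delete is an assumption about what _remove_spdk_ns does:

sync
modprobe -v -r nvme-tcp || :           # source of the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
modprobe -v -r nvme-fabrics || :
kill "$nvmfpid" && wait "$nvmfpid"     # nvmfpid=73460 in this run
ip netns delete nvmf_tgt_ns_spdk || :  # assumed equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if || :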
00:14:05.341 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qNPoqiRoUl /tmp/tmp.6Nd8X6u7xh /tmp/tmp.1sgGCHDUzq 00:14:05.601 ************************************ 00:14:05.601 END TEST nvmf_tls 00:14:05.601 ************************************ 00:14:05.601 00:14:05.601 real 1m27.434s 00:14:05.601 user 2m17.720s 00:14:05.601 sys 0m28.848s 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:05.601 ************************************ 00:14:05.601 START TEST nvmf_fips 00:14:05.601 ************************************ 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:05.601 * Looking for test storage... 00:14:05.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 
-- # NET_TYPE=virt 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.601 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 
-- # : 0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:05.602 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:05.862 Error setting digest 00:14:05.862 00127451067F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:05.862 00127451067F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.862 
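Before it sends any traffic, fips.sh has just confirmed that the OpenSSL in this environment really enforces FIPS mode. A condensed paraphrase of those checks, not the literal script logic (which lives in test/nvmf/fips/fips.sh and scripts/common.sh):

# 1. OpenSSL must be at least 3.0.0 (the trace compares the installed 3.0.9 against that floor).
ver=$(openssl version | awk '{print $2}')
[[ $(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1) == 3.0.0 ]]

# 2. The FIPS provider module must be installed where OpenSSL looks for modules.
[[ -f "$(openssl info -modulesdir)/fips.so" ]]

# 3. With the generated spdk_fips.conf in force, both a base and a fips provider must be loaded.
OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name

# 4. Negative test: a non-approved digest has to be rejected, which is exactly the
#    "Error setting digest" failure captured in the log.
if echo test | OPENSSL_CONF=spdk_fips.conf openssl md5; then
    echo "MD5 worked, so this is not a FIPS-enforcing build" >&2
    exit 1
fi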
10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:05.862 Cannot find device "nvmf_tgt_br" 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.862 Cannot find device "nvmf_tgt_br2" 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:05.862 10:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:05.862 Cannot find device "nvmf_tgt_br" 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:05.862 Cannot find device "nvmf_tgt_br2" 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.862 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:06.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:14:06.123 00:14:06.123 --- 10.0.0.2 ping statistics --- 00:14:06.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.123 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:06.123 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.123 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:14:06.123 00:14:06.123 --- 10.0.0.3 ping statistics --- 00:14:06.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.123 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:06.123 00:14:06.123 --- 10.0.0.1 ping statistics --- 00:14:06.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.123 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73758 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73758 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73758 ']' 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.123 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:06.123 [2024-07-25 10:52:35.856433] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:06.123 [2024-07-25 10:52:35.856532] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.387 [2024-07-25 10:52:36.001002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.647 [2024-07-25 10:52:36.132986] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.647 [2024-07-25 10:52:36.133040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.647 [2024-07-25 10:52:36.133054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.647 [2024-07-25 10:52:36.133064] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.647 [2024-07-25 10:52:36.133074] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.647 [2024-07-25 10:52:36.133107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.647 [2024-07-25 10:52:36.192208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:07.218 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.476 [2024-07-25 10:52:37.176751] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.476 [2024-07-25 10:52:37.192686] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:07.476 [2024-07-25 10:52:37.192983] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.735 [2024-07-25 10:52:37.224941] 
tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:07.735 malloc0 00:14:07.735 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:07.735 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73799 00:14:07.735 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:07.735 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73799 /var/tmp/bdevperf.sock 00:14:07.735 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73799 ']' 00:14:07.735 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.735 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:07.735 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.735 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.735 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:07.735 [2024-07-25 10:52:37.342446] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:07.735 [2024-07-25 10:52:37.342575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73799 ] 00:14:07.993 [2024-07-25 10:52:37.484358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.993 [2024-07-25 10:52:37.602503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.993 [2024-07-25 10:52:37.657589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:08.563 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.563 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:08.563 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:08.833 [2024-07-25 10:52:38.535876] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.833 [2024-07-25 10:52:38.536019] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:09.091 TLSTESTn1 00:14:09.091 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:09.091 Running I/O for 10 seconds... 
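The TLSTESTn1 job now running was wired up a few entries earlier: fips.sh writes a PSK in the NVMe TLS interchange format to key.txt, restricts it to mode 0600, hands the same path to the target (hence the nvmf_tcp_psk_path deprecation warning above), and attaches from bdevperf with --psk. A sketch of the initiator-side steps using the exact values from the log; only the working directory for key.txt is an assumption:

# PSK in interchange format, as echoed by fips.sh; the trailing ':' is part of the key.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > key.txt
chmod 0600 key.txt

# Attach to the TLS listener on 10.0.0.2:4420; this is the call that produced the
# "TLS support is considered experimental" notice above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk ./key.txt

# Then drive the verify workload for 10 seconds, matching the bdevperf -t 10 run above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests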
00:14:19.070 00:14:19.070 Latency(us) 00:14:19.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.070 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:19.070 Verification LBA range: start 0x0 length 0x2000 00:14:19.070 TLSTESTn1 : 10.02 3916.75 15.30 0.00 0.00 32618.61 6851.49 34317.03 00:14:19.070 =================================================================================================================== 00:14:19.070 Total : 3916.75 15.30 0.00 0.00 32618.61 6851.49 34317.03 00:14:19.070 0 00:14:19.070 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:19.070 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:19.070 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:14:19.070 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:14:19.070 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:19.070 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:19.070 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:19.070 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:19.070 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:19.070 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:19.070 nvmf_trace.0 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73799 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73799 ']' 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73799 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73799 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:19.329 killing process with pid 73799 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73799' 00:14:19.329 Received shutdown signal, test time was about 10.000000 seconds 00:14:19.329 00:14:19.329 Latency(us) 00:14:19.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.329 =================================================================================================================== 00:14:19.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73799 00:14:19.329 [2024-07-25 10:52:48.895179] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:19.329 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73799 00:14:19.588 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:19.588 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:19.588 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:19.588 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.588 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:19.588 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.588 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.588 rmmod nvme_tcp 00:14:19.847 rmmod nvme_fabrics 00:14:19.847 rmmod nvme_keyring 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73758 ']' 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73758 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73758 ']' 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73758 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73758 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:19.847 killing process with pid 73758 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73758' 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73758 00:14:19.847 [2024-07-25 10:52:49.395249] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:19.847 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73758 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:20.106 00:14:20.106 real 0m14.536s 00:14:20.106 user 0m19.725s 00:14:20.106 sys 0m5.938s 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:20.106 ************************************ 00:14:20.106 END TEST nvmf_fips 00:14:20.106 ************************************ 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:14:20.106 00:14:20.106 real 4m37.058s 00:14:20.106 user 9m37.145s 00:14:20.106 sys 1m3.433s 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.106 ************************************ 00:14:20.106 10:52:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.106 END TEST nvmf_target_extra 00:14:20.106 ************************************ 00:14:20.106 10:52:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:20.106 10:52:49 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:20.106 10:52:49 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.106 10:52:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.106 ************************************ 00:14:20.106 START TEST nvmf_host 00:14:20.107 ************************************ 00:14:20.107 10:52:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:20.367 * Looking for test storage... 
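The cleanup traced above ('trap cleanup EXIT' plus nvmftestfini) reduces to four steps: unload the kernel NVMe/TCP initiator modules, stop the nvmf target, dismantle the test network namespace, and remove the PSK file. A minimal sketch using the pid and names from this run; the explicit 'ip netns delete' stands in for the _remove_spdk_ns helper, whose body is not shown in the trace:

  modprobe -v -r nvme-tcp              # -v prints the rmmod steps seen above (nvme_tcp, nvme_fabrics, nvme_keyring)
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # 73758 in this run
  ip netns delete nvmf_tgt_ns_spdk     # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if
  rm -f test/nvmf/fips/key.txt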
00:14:20.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:20.367 ************************************ 00:14:20.367 START TEST nvmf_identify 00:14:20.367 ************************************ 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:20.367 * Looking for test storage... 
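Each suite sources nvmf/common.sh, which builds a per-run host identity: 'nvme gen-hostnqn' produces NVME_HOSTNQN, and NVME_HOST bundles the matching --hostnqn/--hostid pair that the kernel-initiator tests are meant to hand to $NVME_CONNECT. Expanded, such a connect call would look roughly like the line below; the UUID, address, port and subsystem NQN are the values set in this run, while the composed command itself is an illustration and is not executed anywhere in this trace:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c \
      --hostid=bb4b8bd3-cfb4-4368-bf29-91254747069c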
00:14:20.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.367 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:20.368 10:52:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:20.368 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:20.368 Cannot find device "nvmf_tgt_br" 00:14:20.368 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:20.368 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:20.368 Cannot find device "nvmf_tgt_br2" 00:14:20.368 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:20.368 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:20.368 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:20.368 Cannot find device "nvmf_tgt_br" 00:14:20.368 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:14:20.368 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:20.368 Cannot find device "nvmf_tgt_br2" 00:14:20.368 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:20.368 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:20.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:20.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:20.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:14:20.627 00:14:20.627 --- 10.0.0.2 ping statistics --- 00:14:20.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.627 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:20.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:20.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:14:20.627 00:14:20.627 --- 10.0.0.3 ping statistics --- 00:14:20.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.627 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:20.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:20.627 00:14:20.627 --- 10.0.0.1 ping statistics --- 00:14:20.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.627 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:20.627 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:20.628 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74179 00:14:20.628 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:20.628 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:14:20.628 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74179 00:14:20.628 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 74179 ']' 00:14:20.628 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.628 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.628 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.628 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.628 10:52:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:20.886 [2024-07-25 10:52:50.403459] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:20.886 [2024-07-25 10:52:50.403578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.886 [2024-07-25 10:52:50.538853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.145 [2024-07-25 10:52:50.642195] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.145 [2024-07-25 10:52:50.642250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.145 [2024-07-25 10:52:50.642261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.145 [2024-07-25 10:52:50.642270] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.145 [2024-07-25 10:52:50.642278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
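Stripped of the xtrace noise, the veth/bridge topology that nvmf_veth_init assembled above (before nvmf_tgt was launched) comes down to the following sketch; interface names, addresses and rules are exactly those shown in the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk     # target-side ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if            # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br             # bridge the host-side veth ends together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # host-to-target reachability checks
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1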
00:14:21.145 [2024-07-25 10:52:50.642476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.145 [2024-07-25 10:52:50.642640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.145 [2024-07-25 10:52:50.643078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.145 [2024-07-25 10:52:50.643083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.145 [2024-07-25 10:52:50.695559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:21.712 [2024-07-25 10:52:51.377969] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.712 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:21.982 Malloc0 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:21.982 [2024-07-25 10:52:51.483341] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:21.982 [ 00:14:21.982 { 00:14:21.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:21.982 "subtype": "Discovery", 00:14:21.982 "listen_addresses": [ 00:14:21.982 { 00:14:21.982 "trtype": "TCP", 00:14:21.982 "adrfam": "IPv4", 00:14:21.982 "traddr": "10.0.0.2", 00:14:21.982 "trsvcid": "4420" 00:14:21.982 } 00:14:21.982 ], 00:14:21.982 "allow_any_host": true, 00:14:21.982 "hosts": [] 00:14:21.982 }, 00:14:21.982 { 00:14:21.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.982 "subtype": "NVMe", 00:14:21.982 "listen_addresses": [ 00:14:21.982 { 00:14:21.982 "trtype": "TCP", 00:14:21.982 "adrfam": "IPv4", 00:14:21.982 "traddr": "10.0.0.2", 00:14:21.982 "trsvcid": "4420" 00:14:21.982 } 00:14:21.982 ], 00:14:21.982 "allow_any_host": true, 00:14:21.982 "hosts": [], 00:14:21.982 "serial_number": "SPDK00000000000001", 00:14:21.982 "model_number": "SPDK bdev Controller", 00:14:21.982 "max_namespaces": 32, 00:14:21.982 "min_cntlid": 1, 00:14:21.982 "max_cntlid": 65519, 00:14:21.982 "namespaces": [ 00:14:21.982 { 00:14:21.982 "nsid": 1, 00:14:21.982 "bdev_name": "Malloc0", 00:14:21.982 "name": "Malloc0", 00:14:21.982 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:21.982 "eui64": "ABCDEF0123456789", 00:14:21.982 "uuid": "f03e8d3f-5761-47ac-947a-8e59bcaebd65" 00:14:21.982 } 00:14:21.982 ] 00:14:21.982 } 00:14:21.982 ] 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.982 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:21.982 [2024-07-25 10:52:51.539951] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
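The subsystem layout reported by nvmf_get_subsystems above is assembled with a handful of RPCs. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, so issuing the same arguments directly should be equivalent (that equivalence is an assumption; the arguments themselves are copied from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # the test then points the identify example at the discovery service:
  ./build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all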
00:14:21.982 [2024-07-25 10:52:51.539998] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74214 ] 00:14:21.982 [2024-07-25 10:52:51.675766] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:21.982 [2024-07-25 10:52:51.675852] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:21.982 [2024-07-25 10:52:51.675859] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:21.982 [2024-07-25 10:52:51.675882] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:21.982 [2024-07-25 10:52:51.675894] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:21.982 [2024-07-25 10:52:51.676052] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:21.982 [2024-07-25 10:52:51.676102] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13802c0 0 00:14:21.982 [2024-07-25 10:52:51.688951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:21.982 [2024-07-25 10:52:51.688976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:21.982 [2024-07-25 10:52:51.688983] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:21.982 [2024-07-25 10:52:51.688987] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:21.982 [2024-07-25 10:52:51.689035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.982 [2024-07-25 10:52:51.689042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.982 [2024-07-25 10:52:51.689047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13802c0) 00:14:21.982 [2024-07-25 10:52:51.689062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:21.982 [2024-07-25 10:52:51.689093] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1940, cid 0, qid 0 00:14:21.982 [2024-07-25 10:52:51.696936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.983 [2024-07-25 10:52:51.696958] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.983 [2024-07-25 10:52:51.696964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.696969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1940) on tqpair=0x13802c0 00:14:21.983 [2024-07-25 10:52:51.696988] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:21.983 [2024-07-25 10:52:51.696997] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:21.983 [2024-07-25 10:52:51.697003] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:21.983 [2024-07-25 10:52:51.697022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.983 
[2024-07-25 10:52:51.697032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13802c0) 00:14:21.983 [2024-07-25 10:52:51.697042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.983 [2024-07-25 10:52:51.697071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1940, cid 0, qid 0 00:14:21.983 [2024-07-25 10:52:51.697128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.983 [2024-07-25 10:52:51.697135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.983 [2024-07-25 10:52:51.697139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1940) on tqpair=0x13802c0 00:14:21.983 [2024-07-25 10:52:51.697149] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:21.983 [2024-07-25 10:52:51.697157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:21.983 [2024-07-25 10:52:51.697165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13802c0) 00:14:21.983 [2024-07-25 10:52:51.697180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.983 [2024-07-25 10:52:51.697199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1940, cid 0, qid 0 00:14:21.983 [2024-07-25 10:52:51.697253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.983 [2024-07-25 10:52:51.697259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.983 [2024-07-25 10:52:51.697264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1940) on tqpair=0x13802c0 00:14:21.983 [2024-07-25 10:52:51.697274] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:21.983 [2024-07-25 10:52:51.697283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:21.983 [2024-07-25 10:52:51.697291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697295] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13802c0) 00:14:21.983 [2024-07-25 10:52:51.697313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.983 [2024-07-25 10:52:51.697330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1940, cid 0, qid 0 00:14:21.983 [2024-07-25 10:52:51.697377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.983 [2024-07-25 10:52:51.697384] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.983 [2024-07-25 10:52:51.697388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1940) on tqpair=0x13802c0 00:14:21.983 [2024-07-25 10:52:51.697398] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:21.983 [2024-07-25 10:52:51.697408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13802c0) 00:14:21.983 [2024-07-25 10:52:51.697424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.983 [2024-07-25 10:52:51.697441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1940, cid 0, qid 0 00:14:21.983 [2024-07-25 10:52:51.697488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.983 [2024-07-25 10:52:51.697495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.983 [2024-07-25 10:52:51.697499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1940) on tqpair=0x13802c0 00:14:21.983 [2024-07-25 10:52:51.697508] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:21.983 [2024-07-25 10:52:51.697513] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:21.983 [2024-07-25 10:52:51.697521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:21.983 [2024-07-25 10:52:51.697628] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:21.983 [2024-07-25 10:52:51.697633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:21.983 [2024-07-25 10:52:51.697643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13802c0) 00:14:21.983 [2024-07-25 10:52:51.697659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.983 [2024-07-25 10:52:51.697677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1940, cid 0, qid 0 00:14:21.983 [2024-07-25 10:52:51.697728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.983 [2024-07-25 10:52:51.697735] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.983 [2024-07-25 10:52:51.697739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.983 
[2024-07-25 10:52:51.697743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1940) on tqpair=0x13802c0 00:14:21.983 [2024-07-25 10:52:51.697749] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:21.983 [2024-07-25 10:52:51.697759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13802c0) 00:14:21.983 [2024-07-25 10:52:51.697774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.983 [2024-07-25 10:52:51.697791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1940, cid 0, qid 0 00:14:21.983 [2024-07-25 10:52:51.697842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.983 [2024-07-25 10:52:51.697849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.983 [2024-07-25 10:52:51.697878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1940) on tqpair=0x13802c0 00:14:21.983 [2024-07-25 10:52:51.697888] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:21.983 [2024-07-25 10:52:51.697893] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:21.983 [2024-07-25 10:52:51.697903] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:21.983 [2024-07-25 10:52:51.697913] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:21.983 [2024-07-25 10:52:51.697925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.697930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13802c0) 00:14:21.983 [2024-07-25 10:52:51.697938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.983 [2024-07-25 10:52:51.697959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1940, cid 0, qid 0 00:14:21.983 [2024-07-25 10:52:51.698082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.983 [2024-07-25 10:52:51.698090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.983 [2024-07-25 10:52:51.698094] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.698098] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13802c0): datao=0, datal=4096, cccid=0 00:14:21.983 [2024-07-25 10:52:51.698104] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c1940) on tqpair(0x13802c0): expected_datao=0, payload_size=4096 00:14:21.983 [2024-07-25 10:52:51.698109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.983 
[2024-07-25 10:52:51.698118] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.698122] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.698132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.983 [2024-07-25 10:52:51.698138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.983 [2024-07-25 10:52:51.698142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.983 [2024-07-25 10:52:51.698146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1940) on tqpair=0x13802c0 00:14:21.984 [2024-07-25 10:52:51.698155] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:21.984 [2024-07-25 10:52:51.698160] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:21.984 [2024-07-25 10:52:51.698165] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:21.984 [2024-07-25 10:52:51.698175] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:21.984 [2024-07-25 10:52:51.698181] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:21.984 [2024-07-25 10:52:51.698186] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:21.984 [2024-07-25 10:52:51.698195] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:21.984 [2024-07-25 10:52:51.698204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13802c0) 00:14:21.984 [2024-07-25 10:52:51.698220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.984 [2024-07-25 10:52:51.698238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1940, cid 0, qid 0 00:14:21.984 [2024-07-25 10:52:51.698306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.984 [2024-07-25 10:52:51.698313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.984 [2024-07-25 10:52:51.698317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1940) on tqpair=0x13802c0 00:14:21.984 [2024-07-25 10:52:51.698330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13802c0) 00:14:21.984 [2024-07-25 10:52:51.698345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.984 [2024-07-25 10:52:51.698351] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13802c0) 00:14:21.984 [2024-07-25 10:52:51.698365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.984 [2024-07-25 10:52:51.698371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13802c0) 00:14:21.984 [2024-07-25 10:52:51.698393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.984 [2024-07-25 10:52:51.698399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.984 [2024-07-25 10:52:51.698412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.984 [2024-07-25 10:52:51.698418] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:21.984 [2024-07-25 10:52:51.698427] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:21.984 [2024-07-25 10:52:51.698434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13802c0) 00:14:21.984 [2024-07-25 10:52:51.698445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.984 [2024-07-25 10:52:51.698469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1940, cid 0, qid 0 00:14:21.984 [2024-07-25 10:52:51.698477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1ac0, cid 1, qid 0 00:14:21.984 [2024-07-25 10:52:51.698482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1c40, cid 2, qid 0 00:14:21.984 [2024-07-25 10:52:51.698487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.984 [2024-07-25 10:52:51.698491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1f40, cid 4, qid 0 00:14:21.984 [2024-07-25 10:52:51.698576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.984 [2024-07-25 10:52:51.698583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.984 [2024-07-25 10:52:51.698586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1f40) on tqpair=0x13802c0 00:14:21.984 [2024-07-25 10:52:51.698596] 
nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:21.984 [2024-07-25 10:52:51.698602] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:21.984 [2024-07-25 10:52:51.698613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13802c0) 00:14:21.984 [2024-07-25 10:52:51.698625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.984 [2024-07-25 10:52:51.698643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1f40, cid 4, qid 0 00:14:21.984 [2024-07-25 10:52:51.698702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.984 [2024-07-25 10:52:51.698709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.984 [2024-07-25 10:52:51.698712] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698716] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13802c0): datao=0, datal=4096, cccid=4 00:14:21.984 [2024-07-25 10:52:51.698721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c1f40) on tqpair(0x13802c0): expected_datao=0, payload_size=4096 00:14:21.984 [2024-07-25 10:52:51.698726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698733] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698737] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.984 [2024-07-25 10:52:51.698752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.984 [2024-07-25 10:52:51.698755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1f40) on tqpair=0x13802c0 00:14:21.984 [2024-07-25 10:52:51.698773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:21.984 [2024-07-25 10:52:51.698799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13802c0) 00:14:21.984 [2024-07-25 10:52:51.698813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.984 [2024-07-25 10:52:51.698820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.698828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13802c0) 00:14:21.984 [2024-07-25 10:52:51.698835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.984 [2024-07-25 10:52:51.698871] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x13c1f40, cid 4, qid 0 00:14:21.984 [2024-07-25 10:52:51.698880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c20c0, cid 5, qid 0 00:14:21.984 [2024-07-25 10:52:51.698998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.984 [2024-07-25 10:52:51.699006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.984 [2024-07-25 10:52:51.699009] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.699013] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13802c0): datao=0, datal=1024, cccid=4 00:14:21.984 [2024-07-25 10:52:51.699018] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c1f40) on tqpair(0x13802c0): expected_datao=0, payload_size=1024 00:14:21.984 [2024-07-25 10:52:51.699022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.699029] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.699033] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.984 [2024-07-25 10:52:51.699039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.984 [2024-07-25 10:52:51.699045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.984 [2024-07-25 10:52:51.699049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c20c0) on tqpair=0x13802c0 00:14:21.985 [2024-07-25 10:52:51.699071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.985 [2024-07-25 10:52:51.699078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.985 [2024-07-25 10:52:51.699082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1f40) on tqpair=0x13802c0 00:14:21.985 [2024-07-25 10:52:51.699099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13802c0) 00:14:21.985 [2024-07-25 10:52:51.699111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.985 [2024-07-25 10:52:51.699135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1f40, cid 4, qid 0 00:14:21.985 [2024-07-25 10:52:51.699211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.985 [2024-07-25 10:52:51.699218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.985 [2024-07-25 10:52:51.699221] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699225] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13802c0): datao=0, datal=3072, cccid=4 00:14:21.985 [2024-07-25 10:52:51.699230] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c1f40) on tqpair(0x13802c0): expected_datao=0, payload_size=3072 00:14:21.985 [2024-07-25 10:52:51.699235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699242] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699246] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.985 [2024-07-25 10:52:51.699260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.985 [2024-07-25 10:52:51.699263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1f40) on tqpair=0x13802c0 00:14:21.985 [2024-07-25 10:52:51.699278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13802c0) 00:14:21.985 [2024-07-25 10:52:51.699289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.985 [2024-07-25 10:52:51.699312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1f40, cid 4, qid 0 00:14:21.985 [2024-07-25 10:52:51.699376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.985 [2024-07-25 10:52:51.699383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.985 [2024-07-25 10:52:51.699386] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699390] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13802c0): datao=0, datal=8, cccid=4 00:14:21.985 [2024-07-25 10:52:51.699395] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c1f40) on tqpair(0x13802c0): expected_datao=0, payload_size=8 00:14:21.985 [2024-07-25 10:52:51.699399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699406] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699410] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.985 [2024-07-25 10:52:51.699432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.985 [2024-07-25 10:52:51.699436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.985 [2024-07-25 10:52:51.699440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1f40) on tqpair=0x13802c0 00:14:21.985 ===================================================== 00:14:21.985 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:21.985 ===================================================== 00:14:21.985 Controller Capabilities/Features 00:14:21.985 ================================ 00:14:21.985 Vendor ID: 0000 00:14:21.985 Subsystem Vendor ID: 0000 00:14:21.985 Serial Number: .................... 00:14:21.985 Model Number: ........................................ 
00:14:21.985 Firmware Version: 24.09 00:14:21.985 Recommended Arb Burst: 0 00:14:21.985 IEEE OUI Identifier: 00 00 00 00:14:21.985 Multi-path I/O 00:14:21.985 May have multiple subsystem ports: No 00:14:21.985 May have multiple controllers: No 00:14:21.985 Associated with SR-IOV VF: No 00:14:21.985 Max Data Transfer Size: 131072 00:14:21.985 Max Number of Namespaces: 0 00:14:21.985 Max Number of I/O Queues: 1024 00:14:21.985 NVMe Specification Version (VS): 1.3 00:14:21.985 NVMe Specification Version (Identify): 1.3 00:14:21.985 Maximum Queue Entries: 128 00:14:21.985 Contiguous Queues Required: Yes 00:14:21.985 Arbitration Mechanisms Supported 00:14:21.985 Weighted Round Robin: Not Supported 00:14:21.985 Vendor Specific: Not Supported 00:14:21.985 Reset Timeout: 15000 ms 00:14:21.985 Doorbell Stride: 4 bytes 00:14:21.985 NVM Subsystem Reset: Not Supported 00:14:21.985 Command Sets Supported 00:14:21.985 NVM Command Set: Supported 00:14:21.985 Boot Partition: Not Supported 00:14:21.985 Memory Page Size Minimum: 4096 bytes 00:14:21.985 Memory Page Size Maximum: 4096 bytes 00:14:21.985 Persistent Memory Region: Not Supported 00:14:21.985 Optional Asynchronous Events Supported 00:14:21.985 Namespace Attribute Notices: Not Supported 00:14:21.985 Firmware Activation Notices: Not Supported 00:14:21.985 ANA Change Notices: Not Supported 00:14:21.985 PLE Aggregate Log Change Notices: Not Supported 00:14:21.985 LBA Status Info Alert Notices: Not Supported 00:14:21.985 EGE Aggregate Log Change Notices: Not Supported 00:14:21.985 Normal NVM Subsystem Shutdown event: Not Supported 00:14:21.985 Zone Descriptor Change Notices: Not Supported 00:14:21.985 Discovery Log Change Notices: Supported 00:14:21.985 Controller Attributes 00:14:21.985 128-bit Host Identifier: Not Supported 00:14:21.985 Non-Operational Permissive Mode: Not Supported 00:14:21.985 NVM Sets: Not Supported 00:14:21.985 Read Recovery Levels: Not Supported 00:14:21.985 Endurance Groups: Not Supported 00:14:21.985 Predictable Latency Mode: Not Supported 00:14:21.985 Traffic Based Keep ALive: Not Supported 00:14:21.985 Namespace Granularity: Not Supported 00:14:21.985 SQ Associations: Not Supported 00:14:21.985 UUID List: Not Supported 00:14:21.985 Multi-Domain Subsystem: Not Supported 00:14:21.985 Fixed Capacity Management: Not Supported 00:14:21.985 Variable Capacity Management: Not Supported 00:14:21.985 Delete Endurance Group: Not Supported 00:14:21.985 Delete NVM Set: Not Supported 00:14:21.985 Extended LBA Formats Supported: Not Supported 00:14:21.985 Flexible Data Placement Supported: Not Supported 00:14:21.985 00:14:21.985 Controller Memory Buffer Support 00:14:21.985 ================================ 00:14:21.985 Supported: No 00:14:21.985 00:14:21.985 Persistent Memory Region Support 00:14:21.985 ================================ 00:14:21.985 Supported: No 00:14:21.985 00:14:21.985 Admin Command Set Attributes 00:14:21.985 ============================ 00:14:21.985 Security Send/Receive: Not Supported 00:14:21.985 Format NVM: Not Supported 00:14:21.985 Firmware Activate/Download: Not Supported 00:14:21.985 Namespace Management: Not Supported 00:14:21.985 Device Self-Test: Not Supported 00:14:21.985 Directives: Not Supported 00:14:21.985 NVMe-MI: Not Supported 00:14:21.985 Virtualization Management: Not Supported 00:14:21.985 Doorbell Buffer Config: Not Supported 00:14:21.985 Get LBA Status Capability: Not Supported 00:14:21.985 Command & Feature Lockdown Capability: Not Supported 00:14:21.985 Abort Command Limit: 1 00:14:21.985 Async 
Event Request Limit: 4 00:14:21.985 Number of Firmware Slots: N/A 00:14:21.985 Firmware Slot 1 Read-Only: N/A 00:14:21.985 Firmware Activation Without Reset: N/A 00:14:21.985 Multiple Update Detection Support: N/A 00:14:21.985 Firmware Update Granularity: No Information Provided 00:14:21.985 Per-Namespace SMART Log: No 00:14:21.985 Asymmetric Namespace Access Log Page: Not Supported 00:14:21.985 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:21.985 Command Effects Log Page: Not Supported 00:14:21.985 Get Log Page Extended Data: Supported 00:14:21.985 Telemetry Log Pages: Not Supported 00:14:21.985 Persistent Event Log Pages: Not Supported 00:14:21.985 Supported Log Pages Log Page: May Support 00:14:21.985 Commands Supported & Effects Log Page: Not Supported 00:14:21.985 Feature Identifiers & Effects Log Page:May Support 00:14:21.985 NVMe-MI Commands & Effects Log Page: May Support 00:14:21.985 Data Area 4 for Telemetry Log: Not Supported 00:14:21.986 Error Log Page Entries Supported: 128 00:14:21.986 Keep Alive: Not Supported 00:14:21.986 00:14:21.986 NVM Command Set Attributes 00:14:21.986 ========================== 00:14:21.986 Submission Queue Entry Size 00:14:21.986 Max: 1 00:14:21.986 Min: 1 00:14:21.986 Completion Queue Entry Size 00:14:21.986 Max: 1 00:14:21.986 Min: 1 00:14:21.986 Number of Namespaces: 0 00:14:21.986 Compare Command: Not Supported 00:14:21.986 Write Uncorrectable Command: Not Supported 00:14:21.986 Dataset Management Command: Not Supported 00:14:21.986 Write Zeroes Command: Not Supported 00:14:21.986 Set Features Save Field: Not Supported 00:14:21.986 Reservations: Not Supported 00:14:21.986 Timestamp: Not Supported 00:14:21.986 Copy: Not Supported 00:14:21.986 Volatile Write Cache: Not Present 00:14:21.986 Atomic Write Unit (Normal): 1 00:14:21.986 Atomic Write Unit (PFail): 1 00:14:21.986 Atomic Compare & Write Unit: 1 00:14:21.986 Fused Compare & Write: Supported 00:14:21.986 Scatter-Gather List 00:14:21.986 SGL Command Set: Supported 00:14:21.986 SGL Keyed: Supported 00:14:21.986 SGL Bit Bucket Descriptor: Not Supported 00:14:21.986 SGL Metadata Pointer: Not Supported 00:14:21.986 Oversized SGL: Not Supported 00:14:21.986 SGL Metadata Address: Not Supported 00:14:21.986 SGL Offset: Supported 00:14:21.986 Transport SGL Data Block: Not Supported 00:14:21.986 Replay Protected Memory Block: Not Supported 00:14:21.986 00:14:21.986 Firmware Slot Information 00:14:21.986 ========================= 00:14:21.986 Active slot: 0 00:14:21.986 00:14:21.986 00:14:21.986 Error Log 00:14:21.986 ========= 00:14:21.986 00:14:21.986 Active Namespaces 00:14:21.986 ================= 00:14:21.986 Discovery Log Page 00:14:21.986 ================== 00:14:21.986 Generation Counter: 2 00:14:21.986 Number of Records: 2 00:14:21.986 Record Format: 0 00:14:21.986 00:14:21.986 Discovery Log Entry 0 00:14:21.986 ---------------------- 00:14:21.986 Transport Type: 3 (TCP) 00:14:21.986 Address Family: 1 (IPv4) 00:14:21.986 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:21.986 Entry Flags: 00:14:21.986 Duplicate Returned Information: 1 00:14:21.986 Explicit Persistent Connection Support for Discovery: 1 00:14:21.986 Transport Requirements: 00:14:21.986 Secure Channel: Not Required 00:14:21.986 Port ID: 0 (0x0000) 00:14:21.986 Controller ID: 65535 (0xffff) 00:14:21.986 Admin Max SQ Size: 128 00:14:21.986 Transport Service Identifier: 4420 00:14:21.986 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:21.986 Transport Address: 10.0.0.2 00:14:21.986 
Discovery Log Entry 1 00:14:21.986 ---------------------- 00:14:21.986 Transport Type: 3 (TCP) 00:14:21.986 Address Family: 1 (IPv4) 00:14:21.986 Subsystem Type: 2 (NVM Subsystem) 00:14:21.986 Entry Flags: 00:14:21.986 Duplicate Returned Information: 0 00:14:21.986 Explicit Persistent Connection Support for Discovery: 0 00:14:21.986 Transport Requirements: 00:14:21.986 Secure Channel: Not Required 00:14:21.986 Port ID: 0 (0x0000) 00:14:21.986 Controller ID: 65535 (0xffff) 00:14:21.986 Admin Max SQ Size: 128 00:14:21.986 Transport Service Identifier: 4420 00:14:21.986 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:21.986 Transport Address: 10.0.0.2 [2024-07-25 10:52:51.699542] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:21.986 [2024-07-25 10:52:51.699556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1940) on tqpair=0x13802c0 00:14:21.986 [2024-07-25 10:52:51.699563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.986 [2024-07-25 10:52:51.699569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1ac0) on tqpair=0x13802c0 00:14:21.986 [2024-07-25 10:52:51.699574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.986 [2024-07-25 10:52:51.699580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1c40) on tqpair=0x13802c0 00:14:21.986 [2024-07-25 10:52:51.699585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.986 [2024-07-25 10:52:51.699590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.986 [2024-07-25 10:52:51.699595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.986 [2024-07-25 10:52:51.699604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.986 [2024-07-25 10:52:51.699608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.986 [2024-07-25 10:52:51.699612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.986 [2024-07-25 10:52:51.699620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.986 [2024-07-25 10:52:51.699641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.986 [2024-07-25 10:52:51.699691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.986 [2024-07-25 10:52:51.699698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.986 [2024-07-25 10:52:51.699702] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.986 [2024-07-25 10:52:51.699706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.986 [2024-07-25 10:52:51.699718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.986 [2024-07-25 10:52:51.699723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.986 [2024-07-25 10:52:51.699727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.986 [2024-07-25 
10:52:51.699734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.986 [2024-07-25 10:52:51.699756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.986 [2024-07-25 10:52:51.699816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.986 [2024-07-25 10:52:51.699823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.986 [2024-07-25 10:52:51.699826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.986 [2024-07-25 10:52:51.699830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.986 [2024-07-25 10:52:51.699836] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:21.986 [2024-07-25 10:52:51.699841] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:21.986 [2024-07-25 10:52:51.699863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.986 [2024-07-25 10:52:51.699869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.986 [2024-07-25 10:52:51.699873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.986 [2024-07-25 10:52:51.699881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.986 [2024-07-25 10:52:51.699900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.986 [2024-07-25 10:52:51.699947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.986 [2024-07-25 10:52:51.699954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.986 [2024-07-25 10:52:51.699958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.986 [2024-07-25 10:52:51.699962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.986 [2024-07-25 10:52:51.699973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.986 [2024-07-25 10:52:51.699978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.699981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.987 [2024-07-25 10:52:51.699989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.987 [2024-07-25 10:52:51.700006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.987 [2024-07-25 10:52:51.700054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.987 [2024-07-25 10:52:51.700061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.987 [2024-07-25 10:52:51.700065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.987 [2024-07-25 10:52:51.700080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700088] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.987 [2024-07-25 10:52:51.700095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.987 [2024-07-25 10:52:51.700112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.987 [2024-07-25 10:52:51.700157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.987 [2024-07-25 10:52:51.700164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.987 [2024-07-25 10:52:51.700167] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.987 [2024-07-25 10:52:51.700182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.987 [2024-07-25 10:52:51.700197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.987 [2024-07-25 10:52:51.700213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.987 [2024-07-25 10:52:51.700264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.987 [2024-07-25 10:52:51.700276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.987 [2024-07-25 10:52:51.700281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.987 [2024-07-25 10:52:51.700296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.987 [2024-07-25 10:52:51.700312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.987 [2024-07-25 10:52:51.700329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.987 [2024-07-25 10:52:51.700377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.987 [2024-07-25 10:52:51.700384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.987 [2024-07-25 10:52:51.700388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.987 [2024-07-25 10:52:51.700403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.987 [2024-07-25 10:52:51.700418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.987 [2024-07-25 10:52:51.700435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.987 [2024-07-25 10:52:51.700486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.987 [2024-07-25 10:52:51.700493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.987 [2024-07-25 10:52:51.700497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.987 [2024-07-25 10:52:51.700511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.987 [2024-07-25 10:52:51.700527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.987 [2024-07-25 10:52:51.700543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.987 [2024-07-25 10:52:51.700588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.987 [2024-07-25 10:52:51.700594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.987 [2024-07-25 10:52:51.700598] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.987 [2024-07-25 10:52:51.700613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.987 [2024-07-25 10:52:51.700628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.987 [2024-07-25 10:52:51.700644] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.987 [2024-07-25 10:52:51.700689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.987 [2024-07-25 10:52:51.700696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.987 [2024-07-25 10:52:51.700700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.987 [2024-07-25 10:52:51.700714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.987 [2024-07-25 10:52:51.700730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.987 [2024-07-25 10:52:51.700746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.987 
[2024-07-25 10:52:51.700792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.987 [2024-07-25 10:52:51.700799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.987 [2024-07-25 10:52:51.700802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.987 [2024-07-25 10:52:51.700806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.987 [2024-07-25 10:52:51.700817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.988 [2024-07-25 10:52:51.700822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.988 [2024-07-25 10:52:51.700825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.988 [2024-07-25 10:52:51.700833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.988 [2024-07-25 10:52:51.700849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.988 [2024-07-25 10:52:51.704895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.988 [2024-07-25 10:52:51.704904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.988 [2024-07-25 10:52:51.704908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.988 [2024-07-25 10:52:51.704912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.988 [2024-07-25 10:52:51.704926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.988 [2024-07-25 10:52:51.704931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.988 [2024-07-25 10:52:51.704935] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13802c0) 00:14:21.988 [2024-07-25 10:52:51.704944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.988 [2024-07-25 10:52:51.704969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1dc0, cid 3, qid 0 00:14:21.988 [2024-07-25 10:52:51.705023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.988 [2024-07-25 10:52:51.705030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.988 [2024-07-25 10:52:51.705033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.988 [2024-07-25 10:52:51.705038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1dc0) on tqpair=0x13802c0 00:14:21.988 [2024-07-25 10:52:51.705047] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:22.269 00:14:22.269 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:22.269 [2024-07-25 10:52:51.745837] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:22.269 [2024-07-25 10:52:51.745898] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74216 ] 00:14:22.269 [2024-07-25 10:52:51.886615] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:22.269 [2024-07-25 10:52:51.886697] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:22.269 [2024-07-25 10:52:51.886704] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:22.269 [2024-07-25 10:52:51.886719] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:22.269 [2024-07-25 10:52:51.886730] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:22.269 [2024-07-25 10:52:51.886906] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:22.269 [2024-07-25 10:52:51.886960] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b272c0 0 00:14:22.269 [2024-07-25 10:52:51.893876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:22.269 [2024-07-25 10:52:51.893900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:22.269 [2024-07-25 10:52:51.893906] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:22.269 [2024-07-25 10:52:51.893910] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:22.269 [2024-07-25 10:52:51.893957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.893964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.893969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b272c0) 00:14:22.270 [2024-07-25 10:52:51.893996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:22.270 [2024-07-25 10:52:51.894037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68940, cid 0, qid 0 00:14:22.270 [2024-07-25 10:52:51.901870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.270 [2024-07-25 10:52:51.901892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.270 [2024-07-25 10:52:51.901897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.901903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68940) on tqpair=0x1b272c0 00:14:22.270 [2024-07-25 10:52:51.901914] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:22.270 [2024-07-25 10:52:51.901922] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:22.270 [2024-07-25 10:52:51.901929] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:22.270 [2024-07-25 10:52:51.901949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.901954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.901958] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b272c0) 00:14:22.270 [2024-07-25 10:52:51.901968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.270 [2024-07-25 10:52:51.902012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68940, cid 0, qid 0 00:14:22.270 [2024-07-25 10:52:51.902072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.270 [2024-07-25 10:52:51.902079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.270 [2024-07-25 10:52:51.902083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68940) on tqpair=0x1b272c0 00:14:22.270 [2024-07-25 10:52:51.902093] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:22.270 [2024-07-25 10:52:51.902101] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:22.270 [2024-07-25 10:52:51.902110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b272c0) 00:14:22.270 [2024-07-25 10:52:51.902126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.270 [2024-07-25 10:52:51.902145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68940, cid 0, qid 0 00:14:22.270 [2024-07-25 10:52:51.902197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.270 [2024-07-25 10:52:51.902204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.270 [2024-07-25 10:52:51.902208] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68940) on tqpair=0x1b272c0 00:14:22.270 [2024-07-25 10:52:51.902218] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:22.270 [2024-07-25 10:52:51.902227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:22.270 [2024-07-25 10:52:51.902234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b272c0) 00:14:22.270 [2024-07-25 10:52:51.902250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.270 [2024-07-25 10:52:51.902268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68940, cid 0, qid 0 00:14:22.270 [2024-07-25 10:52:51.902316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.270 [2024-07-25 10:52:51.902330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.270 [2024-07-25 10:52:51.902335] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68940) on tqpair=0x1b272c0 00:14:22.270 [2024-07-25 10:52:51.902345] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:22.270 [2024-07-25 10:52:51.902357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b272c0) 00:14:22.270 [2024-07-25 10:52:51.902373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.270 [2024-07-25 10:52:51.902391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68940, cid 0, qid 0 00:14:22.270 [2024-07-25 10:52:51.902437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.270 [2024-07-25 10:52:51.902450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.270 [2024-07-25 10:52:51.902455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68940) on tqpair=0x1b272c0 00:14:22.270 [2024-07-25 10:52:51.902464] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:22.270 [2024-07-25 10:52:51.902470] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:22.270 [2024-07-25 10:52:51.902478] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:22.270 [2024-07-25 10:52:51.902584] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:22.270 [2024-07-25 10:52:51.902595] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:22.270 [2024-07-25 10:52:51.902605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b272c0) 00:14:22.270 [2024-07-25 10:52:51.902621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.270 [2024-07-25 10:52:51.902641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68940, cid 0, qid 0 00:14:22.270 [2024-07-25 10:52:51.902697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.270 [2024-07-25 10:52:51.902708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.270 [2024-07-25 10:52:51.902712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68940) on tqpair=0x1b272c0 00:14:22.270 [2024-07-25 10:52:51.902722] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:22.270 [2024-07-25 10:52:51.902733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b272c0) 00:14:22.270 [2024-07-25 10:52:51.902749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.270 [2024-07-25 10:52:51.902767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68940, cid 0, qid 0 00:14:22.270 [2024-07-25 10:52:51.902815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.270 [2024-07-25 10:52:51.902821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.270 [2024-07-25 10:52:51.902825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68940) on tqpair=0x1b272c0 00:14:22.270 [2024-07-25 10:52:51.902834] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:22.270 [2024-07-25 10:52:51.902840] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:22.270 [2024-07-25 10:52:51.902848] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:22.270 [2024-07-25 10:52:51.902871] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:22.270 [2024-07-25 10:52:51.902883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.902888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b272c0) 00:14:22.270 [2024-07-25 10:52:51.902896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.270 [2024-07-25 10:52:51.902916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68940, cid 0, qid 0 00:14:22.270 [2024-07-25 10:52:51.903017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:22.270 [2024-07-25 10:52:51.903029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:22.270 [2024-07-25 10:52:51.903033] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.903038] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b272c0): datao=0, datal=4096, cccid=0 00:14:22.270 [2024-07-25 10:52:51.903043] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68940) on tqpair(0x1b272c0): expected_datao=0, payload_size=4096 00:14:22.270 [2024-07-25 10:52:51.903050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.903059] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.903063] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 
10:52:51.903073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.270 [2024-07-25 10:52:51.903079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.270 [2024-07-25 10:52:51.903083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.270 [2024-07-25 10:52:51.903087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68940) on tqpair=0x1b272c0 00:14:22.271 [2024-07-25 10:52:51.903096] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:22.271 [2024-07-25 10:52:51.903101] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:22.271 [2024-07-25 10:52:51.903106] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:22.271 [2024-07-25 10:52:51.903115] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:22.271 [2024-07-25 10:52:51.903121] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:22.271 [2024-07-25 10:52:51.903126] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903136] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903145] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b272c0) 00:14:22.271 [2024-07-25 10:52:51.903161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:22.271 [2024-07-25 10:52:51.903181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68940, cid 0, qid 0 00:14:22.271 [2024-07-25 10:52:51.903236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.271 [2024-07-25 10:52:51.903243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.271 [2024-07-25 10:52:51.903247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68940) on tqpair=0x1b272c0 00:14:22.271 [2024-07-25 10:52:51.903259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b272c0) 00:14:22.271 [2024-07-25 10:52:51.903275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.271 [2024-07-25 10:52:51.903281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b272c0) 00:14:22.271 
[2024-07-25 10:52:51.903295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.271 [2024-07-25 10:52:51.903302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b272c0) 00:14:22.271 [2024-07-25 10:52:51.903316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.271 [2024-07-25 10:52:51.903322] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.271 [2024-07-25 10:52:51.903336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.271 [2024-07-25 10:52:51.903341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b272c0) 00:14:22.271 [2024-07-25 10:52:51.903368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.271 [2024-07-25 10:52:51.903392] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68940, cid 0, qid 0 00:14:22.271 [2024-07-25 10:52:51.903400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68ac0, cid 1, qid 0 00:14:22.271 [2024-07-25 10:52:51.903405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68c40, cid 2, qid 0 00:14:22.271 [2024-07-25 10:52:51.903410] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.271 [2024-07-25 10:52:51.903414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68f40, cid 4, qid 0 00:14:22.271 [2024-07-25 10:52:51.903519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.271 [2024-07-25 10:52:51.903534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.271 [2024-07-25 10:52:51.903538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68f40) on tqpair=0x1b272c0 00:14:22.271 [2024-07-25 10:52:51.903549] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:22.271 [2024-07-25 10:52:51.903554] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903563] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b272c0) 00:14:22.271 [2024-07-25 10:52:51.903594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:22.271 [2024-07-25 10:52:51.903613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68f40, cid 4, qid 0 00:14:22.271 [2024-07-25 10:52:51.903661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.271 [2024-07-25 10:52:51.903668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.271 [2024-07-25 10:52:51.903672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68f40) on tqpair=0x1b272c0 00:14:22.271 [2024-07-25 10:52:51.903742] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903773] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b272c0) 00:14:22.271 [2024-07-25 10:52:51.903780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.271 [2024-07-25 10:52:51.903800] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68f40, cid 4, qid 0 00:14:22.271 [2024-07-25 10:52:51.903873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:22.271 [2024-07-25 10:52:51.903881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:22.271 [2024-07-25 10:52:51.903885] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903889] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b272c0): datao=0, datal=4096, cccid=4 00:14:22.271 [2024-07-25 10:52:51.903894] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68f40) on tqpair(0x1b272c0): expected_datao=0, payload_size=4096 00:14:22.271 [2024-07-25 10:52:51.903899] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903907] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903911] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.271 [2024-07-25 10:52:51.903926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:22.271 [2024-07-25 10:52:51.903929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68f40) on tqpair=0x1b272c0 00:14:22.271 [2024-07-25 10:52:51.903945] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:22.271 [2024-07-25 10:52:51.903958] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903969] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:22.271 [2024-07-25 10:52:51.903977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.903982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b272c0) 00:14:22.271 [2024-07-25 10:52:51.903989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.271 [2024-07-25 10:52:51.904010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68f40, cid 4, qid 0 00:14:22.271 [2024-07-25 10:52:51.904090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:22.271 [2024-07-25 10:52:51.904101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:22.271 [2024-07-25 10:52:51.904105] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.904109] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b272c0): datao=0, datal=4096, cccid=4 00:14:22.271 [2024-07-25 10:52:51.904115] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68f40) on tqpair(0x1b272c0): expected_datao=0, payload_size=4096 00:14:22.271 [2024-07-25 10:52:51.904120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.904127] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.904131] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.904140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.271 [2024-07-25 10:52:51.904146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.271 [2024-07-25 10:52:51.904150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.271 [2024-07-25 10:52:51.904154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68f40) on tqpair=0x1b272c0 00:14:22.272 [2024-07-25 10:52:51.904170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:22.272 [2024-07-25 10:52:51.904181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:22.272 [2024-07-25 10:52:51.904190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b272c0) 00:14:22.272 [2024-07-25 10:52:51.904203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.272 [2024-07-25 10:52:51.904223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68f40, cid 4, qid 0 00:14:22.272 [2024-07-25 10:52:51.904282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:22.272 [2024-07-25 10:52:51.904288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:22.272 [2024-07-25 10:52:51.904292] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904296] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b272c0): datao=0, datal=4096, cccid=4 00:14:22.272 [2024-07-25 10:52:51.904301] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68f40) on tqpair(0x1b272c0): expected_datao=0, payload_size=4096 00:14:22.272 [2024-07-25 10:52:51.904306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904313] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904317] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.272 [2024-07-25 10:52:51.904331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.272 [2024-07-25 10:52:51.904335] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68f40) on tqpair=0x1b272c0 00:14:22.272 [2024-07-25 10:52:51.904348] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:22.272 [2024-07-25 10:52:51.904357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:22.272 [2024-07-25 10:52:51.904368] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:22.272 [2024-07-25 10:52:51.904374] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:22.272 [2024-07-25 10:52:51.904380] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:22.272 [2024-07-25 10:52:51.904385] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:22.272 [2024-07-25 10:52:51.904391] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:22.272 [2024-07-25 10:52:51.904396] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:22.272 [2024-07-25 10:52:51.904402] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:22.272 [2024-07-25 10:52:51.904420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b272c0) 00:14:22.272 [2024-07-25 10:52:51.904432] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.272 [2024-07-25 10:52:51.904440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b272c0) 00:14:22.272 [2024-07-25 10:52:51.904454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.272 [2024-07-25 10:52:51.904478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68f40, cid 4, qid 0 00:14:22.272 [2024-07-25 10:52:51.904485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b690c0, cid 5, qid 0 00:14:22.272 [2024-07-25 10:52:51.904575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.272 [2024-07-25 10:52:51.904587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.272 [2024-07-25 10:52:51.904591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68f40) on tqpair=0x1b272c0 00:14:22.272 [2024-07-25 10:52:51.904603] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.272 [2024-07-25 10:52:51.904609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.272 [2024-07-25 10:52:51.904613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b690c0) on tqpair=0x1b272c0 00:14:22.272 [2024-07-25 10:52:51.904628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904633] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b272c0) 00:14:22.272 [2024-07-25 10:52:51.904640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.272 [2024-07-25 10:52:51.904658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b690c0, cid 5, qid 0 00:14:22.272 [2024-07-25 10:52:51.904704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.272 [2024-07-25 10:52:51.904715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.272 [2024-07-25 10:52:51.904720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b690c0) on tqpair=0x1b272c0 00:14:22.272 [2024-07-25 10:52:51.904735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b272c0) 00:14:22.272 [2024-07-25 10:52:51.904747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.272 [2024-07-25 10:52:51.904764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b690c0, cid 5, qid 0 00:14:22.272 [2024-07-25 10:52:51.904816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.272 
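The entries above show Namespace 1 being discovered and identified (IDENTIFY active NS, IDENTIFY NS, NS identifier descriptors) before the driver moves on to its feature queries. A short illustrative sketch of walking the active namespaces of an attached controller follows; it assumes a ctrlr handle obtained as in the earlier sketch and is not part of the test itself.

/* Illustrative only: list the active namespaces of an already attached
 * controller, such as the one whose namespace identification is traced above. */
#include <stdio.h>
#include "spdk/nvme.h"

static void print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL) {
			continue;
		}
		printf("Namespace %u: %llu bytes, %u-byte sectors\n",
		       nsid,
		       (unsigned long long)spdk_nvme_ns_get_size(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}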
[2024-07-25 10:52:51.904824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.272 [2024-07-25 10:52:51.904828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904832] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b690c0) on tqpair=0x1b272c0 00:14:22.272 [2024-07-25 10:52:51.904843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904848] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b272c0) 00:14:22.272 [2024-07-25 10:52:51.904869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.272 [2024-07-25 10:52:51.904889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b690c0, cid 5, qid 0 00:14:22.272 [2024-07-25 10:52:51.904940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.272 [2024-07-25 10:52:51.904947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.272 [2024-07-25 10:52:51.904950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b690c0) on tqpair=0x1b272c0 00:14:22.272 [2024-07-25 10:52:51.904974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b272c0) 00:14:22.272 [2024-07-25 10:52:51.904987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.272 [2024-07-25 10:52:51.904995] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.904999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b272c0) 00:14:22.272 [2024-07-25 10:52:51.905006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.272 [2024-07-25 10:52:51.905013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.905018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b272c0) 00:14:22.272 [2024-07-25 10:52:51.905024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.272 [2024-07-25 10:52:51.905032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.905036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b272c0) 00:14:22.272 [2024-07-25 10:52:51.905043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.272 [2024-07-25 10:52:51.905063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b690c0, cid 5, qid 0 00:14:22.272 [2024-07-25 10:52:51.905069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68f40, cid 4, qid 0 00:14:22.272 [2024-07-25 10:52:51.905075] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b69240, cid 6, qid 0 00:14:22.272 [2024-07-25 10:52:51.905079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b693c0, cid 7, qid 0 00:14:22.272 [2024-07-25 10:52:51.905219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:22.272 [2024-07-25 10:52:51.905231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:22.272 [2024-07-25 10:52:51.905235] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.905239] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b272c0): datao=0, datal=8192, cccid=5 00:14:22.272 [2024-07-25 10:52:51.905244] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b690c0) on tqpair(0x1b272c0): expected_datao=0, payload_size=8192 00:14:22.272 [2024-07-25 10:52:51.905249] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.905267] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.905271] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.905278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:22.272 [2024-07-25 10:52:51.905284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:22.272 [2024-07-25 10:52:51.905287] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:22.272 [2024-07-25 10:52:51.905291] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b272c0): datao=0, datal=512, cccid=4 00:14:22.273 [2024-07-25 10:52:51.905296] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68f40) on tqpair(0x1b272c0): expected_datao=0, payload_size=512 00:14:22.273 [2024-07-25 10:52:51.905301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905307] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905311] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:22.273 [2024-07-25 10:52:51.905322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:22.273 [2024-07-25 10:52:51.905326] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905330] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b272c0): datao=0, datal=512, cccid=6 00:14:22.273 [2024-07-25 10:52:51.905334] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b69240) on tqpair(0x1b272c0): expected_datao=0, payload_size=512 00:14:22.273 [2024-07-25 10:52:51.905339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905345] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905349] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:22.273 [2024-07-25 10:52:51.905360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:22.273 [2024-07-25 10:52:51.905364] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905368] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1b272c0): datao=0, datal=4096, cccid=7 00:14:22.273 [2024-07-25 10:52:51.905372] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b693c0) on tqpair(0x1b272c0): expected_datao=0, payload_size=4096 00:14:22.273 [2024-07-25 10:52:51.905377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905384] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905387] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.273 [2024-07-25 10:52:51.905401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.273 [2024-07-25 10:52:51.905405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b690c0) on tqpair=0x1b272c0 00:14:22.273 [2024-07-25 10:52:51.905427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.273 ===================================================== 00:14:22.273 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:22.273 ===================================================== 00:14:22.273 Controller Capabilities/Features 00:14:22.273 ================================ 00:14:22.273 Vendor ID: 8086 00:14:22.273 Subsystem Vendor ID: 8086 00:14:22.273 Serial Number: SPDK00000000000001 00:14:22.273 Model Number: SPDK bdev Controller 00:14:22.273 Firmware Version: 24.09 00:14:22.273 Recommended Arb Burst: 6 00:14:22.273 IEEE OUI Identifier: e4 d2 5c 00:14:22.273 Multi-path I/O 00:14:22.273 May have multiple subsystem ports: Yes 00:14:22.273 May have multiple controllers: Yes 00:14:22.273 Associated with SR-IOV VF: No 00:14:22.273 Max Data Transfer Size: 131072 00:14:22.273 Max Number of Namespaces: 32 00:14:22.273 Max Number of I/O Queues: 127 00:14:22.273 NVMe Specification Version (VS): 1.3 00:14:22.273 NVMe Specification Version (Identify): 1.3 00:14:22.273 Maximum Queue Entries: 128 00:14:22.273 Contiguous Queues Required: Yes 00:14:22.273 Arbitration Mechanisms Supported 00:14:22.273 Weighted Round Robin: Not Supported 00:14:22.273 Vendor Specific: Not Supported 00:14:22.273 Reset Timeout: 15000 ms 00:14:22.273 Doorbell Stride: 4 bytes 00:14:22.273 NVM Subsystem Reset: Not Supported 00:14:22.273 Command Sets Supported 00:14:22.273 NVM Command Set: Supported 00:14:22.273 Boot Partition: Not Supported 00:14:22.273 Memory Page Size Minimum: 4096 bytes 00:14:22.273 Memory Page Size Maximum: 4096 bytes 00:14:22.273 Persistent Memory Region: Not Supported 00:14:22.273 Optional Asynchronous Events Supported 00:14:22.273 Namespace Attribute Notices: Supported 00:14:22.273 Firmware Activation Notices: Not Supported 00:14:22.273 ANA Change Notices: Not Supported 00:14:22.273 PLE Aggregate Log Change Notices: Not Supported 00:14:22.273 LBA Status Info Alert Notices: Not Supported 00:14:22.273 EGE Aggregate Log Change Notices: Not Supported 00:14:22.273 Normal NVM Subsystem Shutdown event: Not Supported 00:14:22.273 Zone Descriptor Change Notices: Not Supported 00:14:22.273 Discovery Log Change Notices: Not Supported 00:14:22.273 Controller Attributes 00:14:22.273 128-bit Host Identifier: Supported 00:14:22.273 Non-Operational Permissive Mode: Not Supported 00:14:22.273 NVM Sets: Not Supported 00:14:22.273 Read Recovery Levels: Not Supported 00:14:22.273 Endurance Groups: Not 
Supported 00:14:22.273 Predictable Latency Mode: Not Supported 00:14:22.273 Traffic Based Keep ALive: Not Supported 00:14:22.273 Namespace Granularity: Not Supported 00:14:22.273 SQ Associations: Not Supported 00:14:22.273 UUID List: Not Supported 00:14:22.273 Multi-Domain Subsystem: Not Supported 00:14:22.273 Fixed Capacity Management: Not Supported 00:14:22.273 Variable Capacity Management: Not Supported 00:14:22.273 Delete Endurance Group: Not Supported 00:14:22.273 Delete NVM Set: Not Supported 00:14:22.273 Extended LBA Formats Supported: Not Supported 00:14:22.273 Flexible Data Placement Supported: Not Supported 00:14:22.273 00:14:22.273 Controller Memory Buffer Support 00:14:22.273 ================================ 00:14:22.273 Supported: No 00:14:22.273 00:14:22.273 Persistent Memory Region Support 00:14:22.273 ================================ 00:14:22.273 Supported: No 00:14:22.273 00:14:22.273 Admin Command Set Attributes 00:14:22.273 ============================ 00:14:22.273 Security Send/Receive: Not Supported 00:14:22.273 Format NVM: Not Supported 00:14:22.273 Firmware Activate/Download: Not Supported 00:14:22.273 Namespace Management: Not Supported 00:14:22.273 Device Self-Test: Not Supported 00:14:22.273 Directives: Not Supported 00:14:22.273 NVMe-MI: Not Supported 00:14:22.273 Virtualization Management: Not Supported 00:14:22.273 Doorbell Buffer Config: Not Supported 00:14:22.273 Get LBA Status Capability: Not Supported 00:14:22.273 Command & Feature Lockdown Capability: Not Supported 00:14:22.273 Abort Command Limit: 4 00:14:22.273 Async Event Request Limit: 4 00:14:22.273 Number of Firmware Slots: N/A 00:14:22.273 Firmware Slot 1 Read-Only: N/A 00:14:22.273 Firmware Activation Without Reset: [2024-07-25 10:52:51.905434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.273 [2024-07-25 10:52:51.905438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68f40) on tqpair=0x1b272c0 00:14:22.273 [2024-07-25 10:52:51.905455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.273 [2024-07-25 10:52:51.905461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.273 [2024-07-25 10:52:51.905465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b69240) on tqpair=0x1b272c0 00:14:22.273 [2024-07-25 10:52:51.905476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.273 [2024-07-25 10:52:51.905482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.273 [2024-07-25 10:52:51.905486] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.273 [2024-07-25 10:52:51.905490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b693c0) on tqpair=0x1b272c0 00:14:22.273 N/A 00:14:22.273 Multiple Update Detection Support: N/A 00:14:22.273 Firmware Update Granularity: No Information Provided 00:14:22.273 Per-Namespace SMART Log: No 00:14:22.273 Asymmetric Namespace Access Log Page: Not Supported 00:14:22.273 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:22.273 Command Effects Log Page: Supported 00:14:22.273 Get Log Page Extended Data: Supported 00:14:22.273 Telemetry Log Pages: Not Supported 00:14:22.273 Persistent Event Log Pages: Not Supported 00:14:22.273 Supported Log Pages Log Page: May Support 
00:14:22.273 Commands Supported & Effects Log Page: Not Supported 00:14:22.273 Feature Identifiers & Effects Log Page:May Support 00:14:22.273 NVMe-MI Commands & Effects Log Page: May Support 00:14:22.273 Data Area 4 for Telemetry Log: Not Supported 00:14:22.273 Error Log Page Entries Supported: 128 00:14:22.273 Keep Alive: Supported 00:14:22.273 Keep Alive Granularity: 10000 ms 00:14:22.273 00:14:22.273 NVM Command Set Attributes 00:14:22.273 ========================== 00:14:22.273 Submission Queue Entry Size 00:14:22.273 Max: 64 00:14:22.273 Min: 64 00:14:22.273 Completion Queue Entry Size 00:14:22.273 Max: 16 00:14:22.273 Min: 16 00:14:22.273 Number of Namespaces: 32 00:14:22.273 Compare Command: Supported 00:14:22.273 Write Uncorrectable Command: Not Supported 00:14:22.273 Dataset Management Command: Supported 00:14:22.273 Write Zeroes Command: Supported 00:14:22.273 Set Features Save Field: Not Supported 00:14:22.273 Reservations: Supported 00:14:22.274 Timestamp: Not Supported 00:14:22.274 Copy: Supported 00:14:22.274 Volatile Write Cache: Present 00:14:22.274 Atomic Write Unit (Normal): 1 00:14:22.274 Atomic Write Unit (PFail): 1 00:14:22.274 Atomic Compare & Write Unit: 1 00:14:22.274 Fused Compare & Write: Supported 00:14:22.274 Scatter-Gather List 00:14:22.274 SGL Command Set: Supported 00:14:22.274 SGL Keyed: Supported 00:14:22.274 SGL Bit Bucket Descriptor: Not Supported 00:14:22.274 SGL Metadata Pointer: Not Supported 00:14:22.274 Oversized SGL: Not Supported 00:14:22.274 SGL Metadata Address: Not Supported 00:14:22.274 SGL Offset: Supported 00:14:22.274 Transport SGL Data Block: Not Supported 00:14:22.274 Replay Protected Memory Block: Not Supported 00:14:22.274 00:14:22.274 Firmware Slot Information 00:14:22.274 ========================= 00:14:22.274 Active slot: 1 00:14:22.274 Slot 1 Firmware Revision: 24.09 00:14:22.274 00:14:22.274 00:14:22.274 Commands Supported and Effects 00:14:22.274 ============================== 00:14:22.274 Admin Commands 00:14:22.274 -------------- 00:14:22.274 Get Log Page (02h): Supported 00:14:22.274 Identify (06h): Supported 00:14:22.274 Abort (08h): Supported 00:14:22.274 Set Features (09h): Supported 00:14:22.274 Get Features (0Ah): Supported 00:14:22.274 Asynchronous Event Request (0Ch): Supported 00:14:22.274 Keep Alive (18h): Supported 00:14:22.274 I/O Commands 00:14:22.274 ------------ 00:14:22.274 Flush (00h): Supported LBA-Change 00:14:22.274 Write (01h): Supported LBA-Change 00:14:22.274 Read (02h): Supported 00:14:22.274 Compare (05h): Supported 00:14:22.274 Write Zeroes (08h): Supported LBA-Change 00:14:22.274 Dataset Management (09h): Supported LBA-Change 00:14:22.274 Copy (19h): Supported LBA-Change 00:14:22.274 00:14:22.274 Error Log 00:14:22.274 ========= 00:14:22.274 00:14:22.274 Arbitration 00:14:22.274 =========== 00:14:22.274 Arbitration Burst: 1 00:14:22.274 00:14:22.274 Power Management 00:14:22.274 ================ 00:14:22.274 Number of Power States: 1 00:14:22.274 Current Power State: Power State #0 00:14:22.274 Power State #0: 00:14:22.274 Max Power: 0.00 W 00:14:22.274 Non-Operational State: Operational 00:14:22.274 Entry Latency: Not Reported 00:14:22.274 Exit Latency: Not Reported 00:14:22.274 Relative Read Throughput: 0 00:14:22.274 Relative Read Latency: 0 00:14:22.274 Relative Write Throughput: 0 00:14:22.274 Relative Write Latency: 0 00:14:22.274 Idle Power: Not Reported 00:14:22.274 Active Power: Not Reported 00:14:22.274 Non-Operational Permissive Mode: Not Supported 00:14:22.274 00:14:22.274 Health 
Information 00:14:22.274 ================== 00:14:22.274 Critical Warnings: 00:14:22.274 Available Spare Space: OK 00:14:22.274 Temperature: OK 00:14:22.274 Device Reliability: OK 00:14:22.274 Read Only: No 00:14:22.274 Volatile Memory Backup: OK 00:14:22.274 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:22.274 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:22.274 Available Spare: 0% 00:14:22.274 Available Spare Threshold: 0% 00:14:22.274 Life Percentage Used:[2024-07-25 10:52:51.905601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.274 [2024-07-25 10:52:51.905608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b272c0) 00:14:22.274 [2024-07-25 10:52:51.905616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.274 [2024-07-25 10:52:51.905649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b693c0, cid 7, qid 0 00:14:22.274 [2024-07-25 10:52:51.905709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.274 [2024-07-25 10:52:51.905717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.274 [2024-07-25 10:52:51.905720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.274 [2024-07-25 10:52:51.905725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b693c0) on tqpair=0x1b272c0 00:14:22.274 [2024-07-25 10:52:51.905787] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:22.274 [2024-07-25 10:52:51.905803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68940) on tqpair=0x1b272c0 00:14:22.274 [2024-07-25 10:52:51.905811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.274 [2024-07-25 10:52:51.905816] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68ac0) on tqpair=0x1b272c0 00:14:22.274 [2024-07-25 10:52:51.905821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.274 [2024-07-25 10:52:51.905827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68c40) on tqpair=0x1b272c0 00:14:22.274 [2024-07-25 10:52:51.905832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.274 [2024-07-25 10:52:51.905837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.274 [2024-07-25 10:52:51.905842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.274 [2024-07-25 10:52:51.909862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.274 [2024-07-25 10:52:51.909881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.274 [2024-07-25 10:52:51.909886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.274 [2024-07-25 10:52:51.909895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.274 [2024-07-25 10:52:51.909925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.274 [2024-07-25 
10:52:51.910031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.274 [2024-07-25 10:52:51.910039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.274 [2024-07-25 10:52:51.910043] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.274 [2024-07-25 10:52:51.910047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.274 [2024-07-25 10:52:51.910056] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.274 [2024-07-25 10:52:51.910061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.274 [2024-07-25 10:52:51.910065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.274 [2024-07-25 10:52:51.910073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.274 [2024-07-25 10:52:51.910096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.274 [2024-07-25 10:52:51.910209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.274 [2024-07-25 10:52:51.910216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.274 [2024-07-25 10:52:51.910220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.274 [2024-07-25 10:52:51.910224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.274 [2024-07-25 10:52:51.910229] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:22.274 [2024-07-25 10:52:51.910235] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:22.274 [2024-07-25 10:52:51.910245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.274 [2024-07-25 10:52:51.910250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.274 [2024-07-25 10:52:51.910254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.910262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.910279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.910355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.910362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.910366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.910381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.910397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.910414] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.910490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.910496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.910500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.910515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.910531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.910547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.910624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.910631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.910635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.910649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.910665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.910682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.910756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.910763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.910766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.910781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.910797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.910813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.910918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 
10:52:51.910933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.910938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.910954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.910962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.910970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.910990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.911052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.911058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.911062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.911077] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.911093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.911110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.911161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.911172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.911176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.911191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.911213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.911230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.911278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.911285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.911289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 
[2024-07-25 10:52:51.911293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.911304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.911320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.911336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.911385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.911391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.911395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.911410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.911426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.911443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.911504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.911514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.911518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.911534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.911550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.911567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.911615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.911622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.911625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.275 [2024-07-25 10:52:51.911640] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.275 [2024-07-25 10:52:51.911656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.275 [2024-07-25 10:52:51.911673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.275 [2024-07-25 10:52:51.911724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.275 [2024-07-25 10:52:51.911731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.275 [2024-07-25 10:52:51.911734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.275 [2024-07-25 10:52:51.911743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.276 [2024-07-25 10:52:51.911753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.911758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.911762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.276 [2024-07-25 10:52:51.911769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.276 [2024-07-25 10:52:51.911785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.276 [2024-07-25 10:52:51.911834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.276 [2024-07-25 10:52:51.911841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.276 [2024-07-25 10:52:51.911844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.911849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.276 [2024-07-25 10:52:51.911870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.911875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.911879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.276 [2024-07-25 10:52:51.911887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.276 [2024-07-25 10:52:51.911905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.276 [2024-07-25 10:52:51.911956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.276 [2024-07-25 10:52:51.911967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.276 [2024-07-25 10:52:51.911971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.911976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.276 [2024-07-25 10:52:51.911986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.911991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.911995] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.276 [2024-07-25 10:52:51.912003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.276 [2024-07-25 10:52:51.912020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.276 [2024-07-25 10:52:51.912073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.276 [2024-07-25 10:52:51.912083] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.276 [2024-07-25 10:52:51.912087] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.912092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.276 [2024-07-25 10:52:51.912103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.912107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.912111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.276 [2024-07-25 10:52:51.912119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.276 [2024-07-25 10:52:51.912136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.276 [2024-07-25 10:52:51.912187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.276 [2024-07-25 10:52:51.912193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.276 [2024-07-25 10:52:51.912197] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.912201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.276 [2024-07-25 10:52:51.912212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.912216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.912220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.276 [2024-07-25 10:52:51.912228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.276 [2024-07-25 10:52:51.912244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0 00:14:22.276 [2024-07-25 10:52:51.912292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:22.276 [2024-07-25 10:52:51.912299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:22.276 [2024-07-25 10:52:51.912302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.912306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0 00:14:22.276 [2024-07-25 10:52:51.912317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.912321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:22.276 [2024-07-25 10:52:51.912325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0) 00:14:22.276 [2024-07-25 10:52:51.912333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:22.276 [2024-07-25 10:52:51.912349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68dc0, cid 3, qid 0
00:14:22.276 [2024-07-25 10:52:51.912394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:22.276 [2024-07-25 10:52:51.912401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:22.276 [2024-07-25 10:52:51.912404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:22.276 [2024-07-25 10:52:51.912409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68dc0) on tqpair=0x1b272c0
00:14:22.276 [2024-07-25 10:52:51.912419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:14:22.276 [2024-07-25 10:52:51.912424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:22.276 [2024-07-25 10:52:51.912428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b272c0)
00:14:22.276 [2024-07-25 10:52:51.912435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the same DEBUG/NOTICE sequence repeats unchanged while the host polls the shutdown status of tqpair 0x1b272c0 with FABRIC PROPERTY GET (qid:0 cid:3); only the timestamps advance, from 10:52:51.912452 through 10:52:51.922046 ...]
00:14:22.280 [2024-07-25 10:52:51.922054] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 11 milliseconds
00:14:22.280 0%
00:14:22.280 Data Units Read: 0
00:14:22.280 Data Units Written: 0
00:14:22.280 Host Read Commands: 0
00:14:22.280 Host Write Commands: 0
00:14:22.280 Controller Busy Time: 0 minutes
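(Aside: the usage counters above and the controller/namespace report that resumes below are output from the nvmf_identify test querying nqn.2016-06.io.spdk:cnode1 over TCP. A minimal sketch of how a comparable report could be pulled with stock nvme-cli follows; the listener address, port and /dev names are placeholders, not values taken from this log:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # placeholder traddr/trsvcid
  nvme id-ctrl /dev/nvme0       # controller data: queue counts, capabilities
  nvme id-ns /dev/nvme0n1       # namespace data: size, NGUID, EUI64, LBA formats
  nvme smart-log /dev/nvme0     # health/usage counters: data units, busy time, temperature times
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
)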
00:14:22.280 Power Cycles: 0 00:14:22.281 Power On Hours: 0 hours 00:14:22.281 Unsafe Shutdowns: 0 00:14:22.281 Unrecoverable Media Errors: 0 00:14:22.281 Lifetime Error Log Entries: 0 00:14:22.281 Warning Temperature Time: 0 minutes 00:14:22.281 Critical Temperature Time: 0 minutes 00:14:22.281 00:14:22.281 Number of Queues 00:14:22.281 ================ 00:14:22.281 Number of I/O Submission Queues: 127 00:14:22.281 Number of I/O Completion Queues: 127 00:14:22.281 00:14:22.281 Active Namespaces 00:14:22.281 ================= 00:14:22.281 Namespace ID:1 00:14:22.281 Error Recovery Timeout: Unlimited 00:14:22.281 Command Set Identifier: NVM (00h) 00:14:22.281 Deallocate: Supported 00:14:22.281 Deallocated/Unwritten Error: Not Supported 00:14:22.281 Deallocated Read Value: Unknown 00:14:22.281 Deallocate in Write Zeroes: Not Supported 00:14:22.281 Deallocated Guard Field: 0xFFFF 00:14:22.281 Flush: Supported 00:14:22.281 Reservation: Supported 00:14:22.281 Namespace Sharing Capabilities: Multiple Controllers 00:14:22.281 Size (in LBAs): 131072 (0GiB) 00:14:22.281 Capacity (in LBAs): 131072 (0GiB) 00:14:22.281 Utilization (in LBAs): 131072 (0GiB) 00:14:22.281 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:22.281 EUI64: ABCDEF0123456789 00:14:22.281 UUID: f03e8d3f-5761-47ac-947a-8e59bcaebd65 00:14:22.281 Thin Provisioning: Not Supported 00:14:22.281 Per-NS Atomic Units: Yes 00:14:22.281 Atomic Boundary Size (Normal): 0 00:14:22.281 Atomic Boundary Size (PFail): 0 00:14:22.281 Atomic Boundary Offset: 0 00:14:22.281 Maximum Single Source Range Length: 65535 00:14:22.281 Maximum Copy Length: 65535 00:14:22.281 Maximum Source Range Count: 1 00:14:22.281 NGUID/EUI64 Never Reused: No 00:14:22.281 Namespace Write Protected: No 00:14:22.281 Number of LBA Formats: 1 00:14:22.281 Current LBA Format: LBA Format #00 00:14:22.281 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:22.281 00:14:22.281 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:22.281 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.281 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.281 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:22.281 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.281 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:22.281 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:22.281 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.281 10:52:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.540 rmmod nvme_tcp 00:14:22.540 rmmod nvme_fabrics 00:14:22.540 rmmod nvme_keyring 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:22.540 10:52:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74179 ']' 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74179 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 74179 ']' 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 74179 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74179 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:22.540 killing process with pid 74179 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74179' 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 74179 00:14:22.540 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 74179 00:14:22.799 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.799 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.799 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.799 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.799 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.799 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.799 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.799 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.799 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:22.799 00:14:22.799 real 0m2.509s 00:14:22.799 user 0m6.919s 00:14:22.799 sys 0m0.650s 00:14:22.800 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:22.800 10:52:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:22.800 ************************************ 00:14:22.800 END TEST nvmf_identify 00:14:22.800 ************************************ 00:14:22.800 10:52:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:22.800 10:52:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:22.800 10:52:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:22.800 10:52:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:22.800 ************************************ 00:14:22.800 START TEST nvmf_perf 00:14:22.800 ************************************ 00:14:22.800 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:22.800 * Looking for test storage... 00:14:22.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:22.800 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.800 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.060 10:52:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:23.060 Cannot find device "nvmf_tgt_br" 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.060 Cannot find device "nvmf_tgt_br2" 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:23.060 Cannot find device "nvmf_tgt_br" 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:14:23.060 Cannot find device "nvmf_tgt_br2" 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.060 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.320 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:23.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:14:23.321 00:14:23.321 --- 10.0.0.2 ping statistics --- 00:14:23.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.321 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:23.321 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.321 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:14:23.321 00:14:23.321 --- 10.0.0.3 ping statistics --- 00:14:23.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.321 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:23.321 00:14:23.321 --- 10.0.0.1 ping statistics --- 00:14:23.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.321 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74380 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74380 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74380 ']' 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
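The nvmf_veth_init trace above builds a small veth/bridge test network before the target starts: a namespace nvmf_tgt_ns_spdk holding the two target-side interfaces, an initiator interface on the host, and a bridge joining the peer ends, verified with three pings. The shell recap below is a condensed, illustrative sketch of that topology using only the interface names, addresses, and rules visible in the trace; it is not the authoritative nvmf/common.sh implementation.

# Sketch (assumes root plus iproute2/iptables): rebuild the topology traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # initiator -> target checks
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # target -> initiator check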
00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:23.321 10:52:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:23.321 [2024-07-25 10:52:53.004131] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:23.321 [2024-07-25 10:52:53.004246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.579 [2024-07-25 10:52:53.148195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.579 [2024-07-25 10:52:53.264593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.579 [2024-07-25 10:52:53.264680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.579 [2024-07-25 10:52:53.264708] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.579 [2024-07-25 10:52:53.264717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.579 [2024-07-25 10:52:53.264729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.579 [2024-07-25 10:52:53.265529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.579 [2024-07-25 10:52:53.265717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.579 [2024-07-25 10:52:53.265808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.579 [2024-07-25 10:52:53.265811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.838 [2024-07-25 10:52:53.321510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:24.405 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.405 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:14:24.405 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.405 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:24.405 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:24.405 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.405 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:24.405 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:24.972 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:24.972 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:25.231 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:25.231 10:52:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:25.489 10:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:25.489 10:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:25.489 10:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:25.489 10:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:25.489 10:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:25.748 [2024-07-25 10:52:55.293084] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.748 10:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:26.006 10:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:26.006 10:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.264 10:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:26.264 10:52:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:26.522 10:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.522 [2024-07-25 10:52:56.227853] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.522 10:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:26.780 10:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:26.780 10:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:26.780 10:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:26.780 10:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:28.184 Initializing NVMe Controllers 00:14:28.184 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:28.184 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:28.184 Initialization complete. Launching workers. 
00:14:28.184 ======================================================== 00:14:28.184 Latency(us) 00:14:28.184 Device Information : IOPS MiB/s Average min max 00:14:28.184 PCIE (0000:00:10.0) NSID 1 from core 0: 22011.23 85.98 1452.80 254.89 8222.98 00:14:28.184 ======================================================== 00:14:28.184 Total : 22011.23 85.98 1452.80 254.89 8222.98 00:14:28.184 00:14:28.184 10:52:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:29.121 Initializing NVMe Controllers 00:14:29.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:29.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:29.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:29.121 Initialization complete. Launching workers. 00:14:29.121 ======================================================== 00:14:29.121 Latency(us) 00:14:29.121 Device Information : IOPS MiB/s Average min max 00:14:29.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3712.22 14.50 269.06 96.26 7201.51 00:14:29.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.88 0.48 8194.95 5981.92 15061.86 00:14:29.121 ======================================================== 00:14:29.122 Total : 3835.10 14.98 523.00 96.26 15061.86 00:14:29.122 00:14:29.380 10:52:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:30.759 Initializing NVMe Controllers 00:14:30.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:30.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:30.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:30.759 Initialization complete. Launching workers. 00:14:30.759 ======================================================== 00:14:30.759 Latency(us) 00:14:30.759 Device Information : IOPS MiB/s Average min max 00:14:30.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8749.22 34.18 3657.64 524.90 8843.84 00:14:30.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3920.20 15.31 8173.36 4711.74 16345.40 00:14:30.759 ======================================================== 00:14:30.759 Total : 12669.42 49.49 5054.91 524.90 16345.40 00:14:30.759 00:14:30.759 10:53:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:30.759 10:53:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:33.294 Initializing NVMe Controllers 00:14:33.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.294 Controller IO queue size 128, less than required. 00:14:33.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:33.294 Controller IO queue size 128, less than required. 
00:14:33.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:33.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:33.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:33.294 Initialization complete. Launching workers. 00:14:33.294 ======================================================== 00:14:33.294 Latency(us) 00:14:33.294 Device Information : IOPS MiB/s Average min max 00:14:33.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1819.44 454.86 71335.11 41817.52 101627.40 00:14:33.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 673.48 168.37 195744.26 48142.37 336495.23 00:14:33.294 ======================================================== 00:14:33.294 Total : 2492.92 623.23 104945.04 41817.52 336495.23 00:14:33.294 00:14:33.294 10:53:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:33.573 Initializing NVMe Controllers 00:14:33.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.573 Controller IO queue size 128, less than required. 00:14:33.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:33.573 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:33.573 Controller IO queue size 128, less than required. 00:14:33.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:33.573 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:33.573 WARNING: Some requested NVMe devices were skipped 00:14:33.573 No valid NVMe controllers or AIO or URING devices found 00:14:33.573 10:53:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:36.125 Initializing NVMe Controllers 00:14:36.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:36.125 Controller IO queue size 128, less than required. 00:14:36.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:36.125 Controller IO queue size 128, less than required. 00:14:36.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:36.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:36.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:36.125 Initialization complete. Launching workers. 
00:14:36.125 00:14:36.125 ==================== 00:14:36.125 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:36.125 TCP transport: 00:14:36.125 polls: 11029 00:14:36.125 idle_polls: 7382 00:14:36.125 sock_completions: 3647 00:14:36.125 nvme_completions: 6083 00:14:36.125 submitted_requests: 9082 00:14:36.125 queued_requests: 1 00:14:36.125 00:14:36.125 ==================== 00:14:36.125 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:36.125 TCP transport: 00:14:36.125 polls: 12168 00:14:36.125 idle_polls: 7800 00:14:36.125 sock_completions: 4368 00:14:36.125 nvme_completions: 6715 00:14:36.125 submitted_requests: 10082 00:14:36.125 queued_requests: 1 00:14:36.125 ======================================================== 00:14:36.125 Latency(us) 00:14:36.125 Device Information : IOPS MiB/s Average min max 00:14:36.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1520.46 380.12 86264.68 41240.34 145108.10 00:14:36.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1678.46 419.61 76704.00 31882.14 103215.70 00:14:36.125 ======================================================== 00:14:36.125 Total : 3198.92 799.73 81248.23 31882.14 145108.10 00:14:36.125 00:14:36.125 10:53:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:36.125 10:53:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.384 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:36.384 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:36.384 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:36.384 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:36.384 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:36.384 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:36.384 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:36.384 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:36.384 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:36.384 rmmod nvme_tcp 00:14:36.644 rmmod nvme_fabrics 00:14:36.644 rmmod nvme_keyring 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74380 ']' 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74380 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74380 ']' 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74380 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74380 00:14:36.644 killing process with pid 74380 00:14:36.644 10:53:06 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74380' 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74380 00:14:36.644 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74380 00:14:37.211 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:37.211 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:37.211 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:37.211 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.211 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:37.211 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.211 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.211 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.471 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:37.471 00:14:37.471 real 0m14.513s 00:14:37.471 user 0m52.438s 00:14:37.471 sys 0m4.288s 00:14:37.471 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:37.471 10:53:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:37.471 ************************************ 00:14:37.471 END TEST nvmf_perf 00:14:37.471 ************************************ 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:37.471 ************************************ 00:14:37.471 START TEST nvmf_fio_host 00:14:37.471 ************************************ 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:37.471 * Looking for test storage... 
00:14:37.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:37.471 10:53:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.471 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:37.472 Cannot find device "nvmf_tgt_br" 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:37.472 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.732 Cannot find device "nvmf_tgt_br2" 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:37.732 
Cannot find device "nvmf_tgt_br" 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:37.732 Cannot find device "nvmf_tgt_br2" 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:37.732 10:53:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:37.732 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:37.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:14:37.991 00:14:37.991 --- 10.0.0.2 ping statistics --- 00:14:37.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.991 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:37.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:37.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:14:37.991 00:14:37.991 --- 10.0.0.3 ping statistics --- 00:14:37.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.991 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:37.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:14:37.991 00:14:37.991 --- 10.0.0.1 ping statistics --- 00:14:37.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.991 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:37.991 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:37.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
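The nvmfappstart and host/fio.sh steps that follow reduce to: start nvmf_tgt inside the target namespace, wait for its RPC socket, then configure an NVMe/TCP subsystem over rpc.py. The sketch below condenses that sequence; paths, flags, and the NQN are taken from this trace, while the simple socket-polling loop is only a simplified stand-in for the harness's waitforlisten helper, not its real implementation.

# Launch the target in the test namespace (log: -i 0 -e 0xFFFF -m 0xF) and wait for /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified stand-in for waitforlisten

# Configure the TCP transport and a subsystem backed by a 64 MB malloc bdev (512 B blocks), as traced below.
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc1
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420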
00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74806 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74806 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 74806 ']' 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.992 10:53:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:37.992 [2024-07-25 10:53:07.601495] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:37.992 [2024-07-25 10:53:07.601616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.251 [2024-07-25 10:53:07.750052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.251 [2024-07-25 10:53:07.875877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.251 [2024-07-25 10:53:07.876197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.251 [2024-07-25 10:53:07.876284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.251 [2024-07-25 10:53:07.876385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.251 [2024-07-25 10:53:07.876568] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
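The target launch just logged reduces to the sketch below. The nvmf_tgt command line is copied from the trace; the readiness loop is an assumption added for illustration (the harness's waitforlisten helper is what actually blocks until /var/tmp/spdk.sock answers), so it shows one plausible way to wait rather than the script's exact logic:

# Sketch: run nvmf_tgt inside the target namespace and wait for its RPC socket to respond.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Assumed polling loop: rpc_get_methods succeeds only once the app is listening on /var/tmp/spdk.sock.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done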
00:14:38.251 [2024-07-25 10:53:07.876817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.251 [2024-07-25 10:53:07.876947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.251 [2024-07-25 10:53:07.877023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.251 [2024-07-25 10:53:07.877027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.251 [2024-07-25 10:53:07.934925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:39.186 10:53:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.186 10:53:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:14:39.186 10:53:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:39.186 [2024-07-25 10:53:08.876399] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.186 10:53:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:39.186 10:53:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.186 10:53:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:39.445 10:53:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:39.704 Malloc1 00:14:39.704 10:53:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:39.962 10:53:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:40.221 10:53:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.479 [2024-07-25 10:53:09.970731] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.479 10:53:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:40.738 10:53:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:40.738 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:40.738 fio-3.35 00:14:40.738 Starting 1 thread 00:14:43.275 00:14:43.275 test: (groupid=0, jobs=1): err= 0: pid=74890: Thu Jul 25 10:53:12 2024 00:14:43.275 read: IOPS=8898, BW=34.8MiB/s (36.4MB/s)(69.8MiB/2007msec) 00:14:43.275 slat (usec): min=2, max=375, avg= 2.56, stdev= 3.60 00:14:43.275 clat (usec): min=2617, max=13559, avg=7468.81, stdev=567.07 00:14:43.275 lat (usec): min=2664, max=13561, avg=7471.37, stdev=566.85 00:14:43.275 clat percentiles (usec): 00:14:43.275 | 1.00th=[ 6194], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7111], 00:14:43.275 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7439], 60.00th=[ 7570], 00:14:43.275 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:14:43.275 | 99.00th=[ 9110], 99.50th=[10028], 99.90th=[11994], 99.95th=[12911], 00:14:43.275 | 99.99th=[13566] 00:14:43.275 bw ( KiB/s): min=34896, max=36048, per=100.00%, avg=35592.00, stdev=523.29, samples=4 00:14:43.275 iops : min= 8724, max= 9012, avg=8898.00, stdev=130.82, samples=4 00:14:43.275 write: IOPS=8913, BW=34.8MiB/s (36.5MB/s)(69.9MiB/2007msec); 0 zone resets 00:14:43.275 slat (usec): min=2, max=290, avg= 2.64, stdev= 3.02 00:14:43.275 clat (usec): min=2470, max=13481, avg=6838.58, stdev=529.08 00:14:43.275 lat (usec): min=2491, max=13483, avg=6841.22, stdev=528.97 00:14:43.275 clat percentiles 
(usec): 00:14:43.275 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:14:43.275 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6915], 00:14:43.275 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7308], 95.00th=[ 7504], 00:14:43.275 | 99.00th=[ 8586], 99.50th=[ 9765], 99.90th=[11207], 99.95th=[12649], 00:14:43.275 | 99.99th=[13435] 00:14:43.275 bw ( KiB/s): min=35344, max=35952, per=99.99%, avg=35650.00, stdev=254.36, samples=4 00:14:43.275 iops : min= 8836, max= 8988, avg=8912.50, stdev=63.59, samples=4 00:14:43.275 lat (msec) : 4=0.08%, 10=99.48%, 20=0.44% 00:14:43.275 cpu : usr=66.95%, sys=23.63%, ctx=53, majf=0, minf=7 00:14:43.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:43.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:43.275 issued rwts: total=17859,17889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.275 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:43.275 00:14:43.275 Run status group 0 (all jobs): 00:14:43.275 READ: bw=34.8MiB/s (36.4MB/s), 34.8MiB/s-34.8MiB/s (36.4MB/s-36.4MB/s), io=69.8MiB (73.1MB), run=2007-2007msec 00:14:43.275 WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2007-2007msec 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:43.275 10:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:43.275 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:43.275 fio-3.35 00:14:43.275 Starting 1 thread 00:14:45.804 00:14:45.804 test: (groupid=0, jobs=1): err= 0: pid=74939: Thu Jul 25 10:53:15 2024 00:14:45.804 read: IOPS=8205, BW=128MiB/s (134MB/s)(257MiB/2006msec) 00:14:45.804 slat (usec): min=3, max=126, avg= 3.75, stdev= 1.74 00:14:45.804 clat (usec): min=2320, max=17345, avg=8620.94, stdev=2621.24 00:14:45.804 lat (usec): min=2324, max=17348, avg=8624.69, stdev=2621.30 00:14:45.804 clat percentiles (usec): 00:14:45.804 | 1.00th=[ 4047], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6325], 00:14:45.804 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8356], 60.00th=[ 8979], 00:14:45.804 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[12125], 95.00th=[13698], 00:14:45.804 | 99.00th=[16057], 99.50th=[16581], 99.90th=[16909], 99.95th=[17171], 00:14:45.804 | 99.99th=[17171] 00:14:45.804 bw ( KiB/s): min=61664, max=78496, per=52.51%, avg=68944.00, stdev=7624.04, samples=4 00:14:45.804 iops : min= 3854, max= 4906, avg=4309.00, stdev=476.50, samples=4 00:14:45.804 write: IOPS=4900, BW=76.6MiB/s (80.3MB/s)(141MiB/1843msec); 0 zone resets 00:14:45.804 slat (usec): min=36, max=359, avg=39.13, stdev= 7.56 00:14:45.804 clat (usec): min=3092, max=22008, avg=12100.11, stdev=2033.43 00:14:45.804 lat (usec): min=3129, max=22046, avg=12139.24, stdev=2034.52 00:14:45.804 clat percentiles (usec): 00:14:45.804 | 1.00th=[ 7832], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10290], 00:14:45.804 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:14:45.804 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15008], 95.00th=[15795], 00:14:45.804 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17957], 99.95th=[18482], 00:14:45.804 | 99.99th=[21890] 00:14:45.804 bw ( KiB/s): min=63744, max=82016, per=91.43%, avg=71688.00, stdev=7997.63, samples=4 00:14:45.804 iops : min= 3984, max= 5126, avg=4480.50, stdev=499.85, samples=4 00:14:45.804 lat (msec) : 4=0.58%, 10=51.76%, 20=47.66%, 50=0.01% 00:14:45.804 cpu : usr=80.45%, sys=14.91%, ctx=16, majf=0, minf=14 00:14:45.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:45.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:45.804 issued rwts: total=16461,9032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:45.804 00:14:45.804 Run status group 0 (all jobs): 00:14:45.804 READ: bw=128MiB/s (134MB/s), 
128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (270MB), run=2006-2006msec 00:14:45.804 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=141MiB (148MB), run=1843-1843msec 00:14:45.804 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.804 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:45.804 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:45.804 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:45.804 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:45.804 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.805 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.063 rmmod nvme_tcp 00:14:46.063 rmmod nvme_fabrics 00:14:46.063 rmmod nvme_keyring 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74806 ']' 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74806 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74806 ']' 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74806 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74806 00:14:46.063 killing process with pid 74806 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74806' 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74806 00:14:46.063 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74806 00:14:46.323 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.323 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.323 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.323 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.323 10:53:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.323 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.323 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.323 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.323 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:46.323 00:14:46.323 real 0m8.958s 00:14:46.323 user 0m36.075s 00:14:46.323 sys 0m2.477s 00:14:46.323 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:46.323 10:53:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:46.323 ************************************ 00:14:46.323 END TEST nvmf_fio_host 00:14:46.323 ************************************ 00:14:46.323 10:53:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:46.323 10:53:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:46.323 10:53:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:46.323 10:53:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:46.323 ************************************ 00:14:46.323 START TEST nvmf_failover 00:14:46.323 ************************************ 00:14:46.323 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:46.583 * Looking for test storage... 00:14:46.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:46.583 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
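Before the failover setup continues, the nvmf_fio_host body that just finished (END TEST above) is easier to follow in condensed form: provision the target over JSON-RPC, then drive it with fio through the SPDK NVMe ioengine. Paths, NQNs and arguments below are copied from the trace; the example_config.fio job file is assumed to be the one shipped in the SPDK tree at the logged path:

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"
# Target provisioning: TCP transport, one 64 MiB malloc bdev (512-byte blocks), a subsystem with that namespace, and listeners.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc1
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# I/O phase: fio with the SPDK NVMe plugin preloaded; the filename string encodes the NVMe/TCP path to the namespace.
LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio \
    "$SPDK/app/fio/nvme/example_config.fio" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096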
00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 
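The nvmftestinit branching recorded above amounts to the paraphrase below. The expanded values in the trace (NET_TYPE=virt at common.sh@21, tcp at common.sh@431) are authoritative; TEST_TRANSPORT is an assumed variable name added for readability, so this is a gloss on the trace rather than a quote of nvmf/common.sh:

# Paraphrase (TEST_TRANSPORT is an assumed name): with a virtual NET_TYPE and a tcp transport, build the veth topology.
if [[ "$NET_TYPE" == phy || "$NET_TYPE" == phy-fallback ]]; then
    : # a physical-NIC setup path exists but is not taken in this run
elif [[ "$TEST_TRANSPORT" == tcp ]]; then
    nvmf_veth_init   # rebuilds the namespace/veth/bridge topology, as in the fio_host test earlier
fi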
00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:46.584 Cannot find device "nvmf_tgt_br" 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:46.584 Cannot find device "nvmf_tgt_br2" 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:46.584 Cannot find device "nvmf_tgt_br" 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:46.584 Cannot find device "nvmf_tgt_br2" 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:46.584 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- 
# ip netns add nvmf_tgt_ns_spdk 00:14:46.585 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.585 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.585 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:46.585 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.585 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.585 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:46.585 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:46.585 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:46.585 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:46.585 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:46.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:14:46.844 00:14:46.844 --- 10.0.0.2 ping statistics --- 00:14:46.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.844 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:46.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:46.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:46.844 00:14:46.844 --- 10.0.0.3 ping statistics --- 00:14:46.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.844 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:46.844 00:14:46.844 --- 10.0.0.1 ping statistics --- 00:14:46.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.844 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:46.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75152 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75152 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75152 ']' 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.844 10:53:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:46.844 [2024-07-25 10:53:16.518305] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:46.844 [2024-07-25 10:53:16.519016] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.103 [2024-07-25 10:53:16.657402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:47.103 [2024-07-25 10:53:16.779259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.103 [2024-07-25 10:53:16.779600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.103 [2024-07-25 10:53:16.779760] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.103 [2024-07-25 10:53:16.779815] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.103 [2024-07-25 10:53:16.779928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.103 [2024-07-25 10:53:16.780156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.103 [2024-07-25 10:53:16.780338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.103 [2024-07-25 10:53:16.780339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.103 [2024-07-25 10:53:16.836309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:48.039 10:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.039 10:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:14:48.039 10:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.039 10:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:48.039 10:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:48.039 10:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.039 10:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:48.299 [2024-07-25 10:53:17.788525] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.299 10:53:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:48.558 Malloc0 00:14:48.558 10:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:48.817 10:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.075 10:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.333 [2024-07-25 10:53:18.813746] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.333 10:53:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:49.333 [2024-07-25 10:53:19.045497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:49.333 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:49.592 [2024-07-25 10:53:19.273682] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:49.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.592 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75215 00:14:49.592 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:49.592 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75215 /var/tmp/bdevperf.sock 00:14:49.592 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:49.592 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75215 ']' 00:14:49.592 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.592 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.592 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
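The failover exercise that the rest of this trace walks through condenses to the sketch below: bdevperf (already started above with -z against /var/tmp/bdevperf.sock) is given two paths to the same subsystem, the workload is kicked off, and the target's listeners are then shuffled to force path switches. Commands and pacing are copied from the trace that follows:

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"
bperf_rpc() { "$RPC" -s /var/tmp/bdevperf.sock "$@"; }   # bdevperf is driven over its own RPC socket
# Two paths (ports 4420 and 4421) to the same subsystem make NVMe0 a multipath bdev.
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
# While I/O runs, drop and re-add listeners on the target so the initiator must fail over between ports.
"$RPC" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$RPC" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
"$RPC" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait   # let the 15-second bdevperf run started above finish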
00:14:49.592 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.592 10:53:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:50.967 10:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.967 10:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:14:50.967 10:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:50.967 NVMe0n1 00:14:50.967 10:53:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:51.534 00:14:51.534 10:53:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75244 00:14:51.534 10:53:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:51.534 10:53:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:52.470 10:53:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.729 10:53:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:56.015 10:53:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:56.015 00:14:56.015 10:53:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:56.273 10:53:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:14:59.556 10:53:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.556 [2024-07-25 10:53:29.166509] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.556 10:53:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:00.493 10:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:00.751 10:53:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75244 00:15:07.391 0 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75215 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75215 ']' 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75215 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.391 10:53:36 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75215 00:15:07.391 killing process with pid 75215 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75215' 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75215 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75215 00:15:07.391 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:07.391 [2024-07-25 10:53:19.341117] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:07.391 [2024-07-25 10:53:19.341249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75215 ] 00:15:07.391 [2024-07-25 10:53:19.479319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.391 [2024-07-25 10:53:19.593279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.391 [2024-07-25 10:53:19.648182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:07.391 Running I/O for 15 seconds... 00:15:07.391 [2024-07-25 10:53:22.285171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.391 [2024-07-25 10:53:22.285252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.285966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.391 [2024-07-25 10:53:22.285980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.391 [2024-07-25 10:53:22.286025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286107] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286405] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.286977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.286993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.287006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.287021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.287035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 
[2024-07-25 10:53:22.287050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.287064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.287080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.287094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.287109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.287122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.287138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.287162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.287178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.287191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.287214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.392 [2024-07-25 10:53:22.287228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.392 [2024-07-25 10:53:22.287244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:103 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.287978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.287994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 
10:53:22.288324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.393 [2024-07-25 10:53:22.288463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.393 [2024-07-25 10:53:22.288477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:22.288506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:22.288535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:22.288564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:22.288597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:22.288626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:22.288655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:22.288684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:22.288713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:22.288742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.288784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.288815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.288844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.288886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.288916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.288945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.288975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.288990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.289004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.289032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.289061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.289090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.289119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.289148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.289184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.394 [2024-07-25 10:53:22.289215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:22.289244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308830 is same with the state(5) to be set 00:15:07.394 [2024-07-25 10:53:22.289282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.394 [2024-07-25 10:53:22.289294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.394 [2024-07-25 10:53:22.289305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65688 len:8 PRP1 0x0 PRP2 0x0 00:15:07.394 [2024-07-25 10:53:22.289318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289378] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2308830 was disconnected and freed. reset controller. 00:15:07.394 [2024-07-25 10:53:22.289398] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:07.394 [2024-07-25 10:53:22.289456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.394 [2024-07-25 10:53:22.289477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.394 [2024-07-25 10:53:22.289521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.394 [2024-07-25 10:53:22.289549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.394 [2024-07-25 10:53:22.289577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:22.289590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:07.394 [2024-07-25 10:53:22.289638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2299570 (9): Bad file descriptor 00:15:07.394 [2024-07-25 10:53:22.293468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:07.394 [2024-07-25 10:53:22.335230] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
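For readers skimming this section: the burst above is the initiator echoing every queued WRITE/READ that was completed with ABORTED - SQ DELETION when the qpair to 10.0.0.2:4420 was torn down, after which bdev_nvme starts a failover to 10.0.0.2:4421, disconnects and resets the controller, and reports "Resetting controller successful." before the next burst begins. The following is a minimal, hypothetical helper sketch (not part of SPDK or these test scripts) that can be fed the raw console log to condense such bursts into per-opcode counts, the affected LBA range, and any failover events; the regular expressions and the summarize() name are assumptions written against the line formats visible in this log.

#!/usr/bin/env python3
# Summarize aborted NVMe commands and failover events from an SPDK autotest console log.
# Hypothetical sketch: it only pattern-matches the *NOTICE* lines shown above
# (nvme_io_qpair_print_command, spdk_nvme_print_completion, bdev_nvme_failover_trid).
import re
import sys
from collections import Counter

# Command echo printed by nvme_io_qpair_print_command (nvme_qpair.c:243 in this log).
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (WRITE|READ) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
# Completion echo printed by spdk_nvme_print_completion (nvme_qpair.c:474 in this log).
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (ABORTED - SQ DELETION) \((\w+)/(\w+)\)")
# Failover notice printed by bdev_nvme_failover_trid.
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)")

def summarize(stream):
    opcodes = Counter()   # WRITE/READ counts among the echoed commands
    statuses = Counter()  # completion status strings (expected: ABORTED - SQ DELETION)
    lbas = []             # LBAs of the echoed commands
    failovers = []        # (from, to) transport address pairs
    for line in stream:
        m = CMD_RE.search(line)
        if m:
            opcodes[m.group(1)] += 1
            lbas.append(int(m.group(5)))
        m = CPL_RE.search(line)
        if m:
            statuses[m.group(1)] += 1
        m = FAILOVER_RE.search(line)
        if m:
            failovers.append((m.group(1), m.group(2)))
    return opcodes, statuses, lbas, failovers

if __name__ == "__main__":
    opcodes, statuses, lbas, failovers = summarize(sys.stdin)
    for op, n in sorted(opcodes.items()):
        print(f"{op:5s} commands echoed: {n}")
    for st, n in sorted(statuses.items()):
        print(f"completions '{st}': {n}")
    if lbas:
        print(f"LBA range: {min(lbas)}..{max(lbas)}")
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")

Usage would be along the lines of "python3 summarize_aborts.py < console.log"; on the burst above it would report a run of WRITE plus a smaller set of READ echoes, all completed as ABORTED - SQ DELETION, and a single failover from 10.0.0.2:4420 to 10.0.0.2:4421.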
00:15:07.394 [2024-07-25 10:53:25.882607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:25.882710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:25.882742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:25.882790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:25.882809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:25.882823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:25.882839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:25.882866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:25.882885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:25.882899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.394 [2024-07-25 10:53:25.882914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.394 [2024-07-25 10:53:25.882928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.882943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.882957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.882982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.882996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883070] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883386] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.395 [2024-07-25 10:53:25.883732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.395 [2024-07-25 10:53:25.883913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.395 [2024-07-25 10:53:25.883926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.883942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.883956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.883989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 
10:53:25.884335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.884725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.396 [2024-07-25 10:53:25.884980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.884996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.885009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.885026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.885040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.885055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.885069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.885085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.396 [2024-07-25 10:53:25.885098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.396 [2024-07-25 10:53:25.885114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.885486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.885521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.885552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 
[2024-07-25 10:53:25.885568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.885581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.885610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.885639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.885669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.885699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.397 [2024-07-25 10:53:25.885975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.885990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.397 [2024-07-25 10:53:25.886336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.397 [2024-07-25 10:53:25.886350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.398 [2024-07-25 10:53:25.886379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.398 [2024-07-25 10:53:25.886409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.398 [2024-07-25 10:53:25.886437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308d50 is same with the state(5) to be set 00:15:07.398 [2024-07-25 10:53:25.886487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.398 [2024-07-25 10:53:25.886506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.398 [2024-07-25 10:53:25.886517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:15:07.398 [2024-07-25 10:53:25.886531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:15:07.398 [2024-07-25 10:53:25.886555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.398 [2024-07-25 10:53:25.886566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78792 len:8 PRP1 0x0 PRP2 0x0 00:15:07.398 [2024-07-25 10:53:25.886580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.398 [2024-07-25 10:53:25.886603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.398 [2024-07-25 10:53:25.886613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78800 len:8 PRP1 0x0 PRP2 0x0 00:15:07.398 [2024-07-25 10:53:25.886627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.398 [2024-07-25 10:53:25.886650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.398 [2024-07-25 10:53:25.886661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78808 len:8 PRP1 0x0 PRP2 0x0 00:15:07.398 [2024-07-25 10:53:25.886674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.398 [2024-07-25 10:53:25.886699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.398 [2024-07-25 10:53:25.886709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78816 len:8 PRP1 0x0 PRP2 0x0 00:15:07.398 [2024-07-25 10:53:25.886723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.398 [2024-07-25 10:53:25.886754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.398 [2024-07-25 10:53:25.886764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78824 len:8 PRP1 0x0 PRP2 0x0 00:15:07.398 [2024-07-25 10:53:25.886778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.398 [2024-07-25 10:53:25.886801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.398 [2024-07-25 10:53:25.886811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78832 len:8 PRP1 0x0 PRP2 0x0 00:15:07.398 [2024-07-25 10:53:25.886825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.398 [2024-07-25 
10:53:25.886848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.398 [2024-07-25 10:53:25.886870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78840 len:8 PRP1 0x0 PRP2 0x0 00:15:07.398 [2024-07-25 10:53:25.886886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.398 [2024-07-25 10:53:25.886911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.398 [2024-07-25 10:53:25.886921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78848 len:8 PRP1 0x0 PRP2 0x0 00:15:07.398 [2024-07-25 10:53:25.886934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.886996] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2308d50 was disconnected and freed. reset controller. 00:15:07.398 [2024-07-25 10:53:25.887016] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:07.398 [2024-07-25 10:53:25.887078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.398 [2024-07-25 10:53:25.887100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.887117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.398 [2024-07-25 10:53:25.887131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.887145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.398 [2024-07-25 10:53:25.887158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.887171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.398 [2024-07-25 10:53:25.887185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:25.887199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:07.398 [2024-07-25 10:53:25.891041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:07.398 [2024-07-25 10:53:25.891084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2299570 (9): Bad file descriptor 00:15:07.398 [2024-07-25 10:53:25.932719] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:07.398 [2024-07-25 10:53:30.455705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.398 [2024-07-25 10:53:30.455793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.455812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.398 [2024-07-25 10:53:30.455825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.455838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.398 [2024-07-25 10:53:30.455862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.455878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.398 [2024-07-25 10:53:30.455890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.455902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299570 is same with the state(5) to be set 00:15:07.398 [2024-07-25 10:53:30.457036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.398 [2024-07-25 10:53:30.457070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.457096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.398 [2024-07-25 10:53:30.457110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.457124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.398 [2024-07-25 10:53:30.457137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.457167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.398 [2024-07-25 10:53:30.457179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.457208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.398 [2024-07-25 10:53:30.457220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.457235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.398 [2024-07-25 10:53:30.457248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.457261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.398 [2024-07-25 10:53:30.457273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.457287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.398 [2024-07-25 10:53:30.457328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.457345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.398 [2024-07-25 10:53:30.457357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.398 [2024-07-25 10:53:30.457372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.457980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.457994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.458034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.458066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.458095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.399 [2024-07-25 10:53:30.458124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 
[2024-07-25 10:53:30.458267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.399 [2024-07-25 10:53:30.458656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.399 [2024-07-25 10:53:30.458670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.458682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.458708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.458733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.458760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.458786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.458812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.458837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.458865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.458893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.458941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.458967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.458981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.458994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.459097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.459123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31616 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.459149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.459174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.459200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.459226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.459254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.400 [2024-07-25 10:53:30.459289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 
[2024-07-25 10:53:30.459421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.400 [2024-07-25 10:53:30.459564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.400 [2024-07-25 10:53:30.459577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.459625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.459654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.459680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.459708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.459734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.401 [2024-07-25 10:53:30.459763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.401 [2024-07-25 10:53:30.459790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.401 [2024-07-25 10:53:30.459818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.401 [2024-07-25 10:53:30.459845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.401 [2024-07-25 10:53:30.459883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.401 [2024-07-25 10:53:30.459913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.401 [2024-07-25 10:53:30.459941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:07.401 [2024-07-25 10:53:30.459967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.459990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:07.401 [2024-07-25 10:53:30.460425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308a10 is same with the state(5) to be set 00:15:07.401 [2024-07-25 10:53:30.460453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.401 [2024-07-25 10:53:30.460464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.401 [2024-07-25 10:53:30.460474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31336 len:8 PRP1 0x0 PRP2 0x0 00:15:07.401 [2024-07-25 10:53:30.460486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.401 [2024-07-25 10:53:30.460508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.401 [2024-07-25 10:53:30.460518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31728 len:8 PRP1 0x0 PRP2 0x0 00:15:07.401 [2024-07-25 10:53:30.460530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.401 [2024-07-25 10:53:30.460550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.401 [2024-07-25 10:53:30.460559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31736 len:8 PRP1 0x0 PRP2 0x0 00:15:07.401 [2024-07-25 10:53:30.460571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.401 [2024-07-25 10:53:30.460593] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.401 [2024-07-25 10:53:30.460603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31744 len:8 PRP1 0x0 PRP2 0x0 00:15:07.401 [2024-07-25 10:53:30.460615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.401 [2024-07-25 10:53:30.460636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.401 [2024-07-25 10:53:30.460645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31752 len:8 PRP1 0x0 PRP2 0x0 00:15:07.401 [2024-07-25 10:53:30.460657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.401 [2024-07-25 10:53:30.460678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.401 [2024-07-25 10:53:30.460687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31760 len:8 PRP1 0x0 PRP2 0x0 00:15:07.401 [2024-07-25 10:53:30.460699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.401 [2024-07-25 10:53:30.460718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.460727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.402 [2024-07-25 10:53:30.460737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31768 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.460749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.460761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.460770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.402 [2024-07-25 10:53:30.460779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31776 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.460791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.460804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.460814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.402 [2024-07-25 10:53:30.460824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31784 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.460836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.460849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.460858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:15:07.402 [2024-07-25 10:53:30.460867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31792 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.460880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.460904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.460916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.402 [2024-07-25 10:53:30.460926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31800 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.460938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.460951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.460960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.402 [2024-07-25 10:53:30.460977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31808 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.460990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.461003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.461012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.402 [2024-07-25 10:53:30.461021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31816 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.461033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.461046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.461054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.402 [2024-07-25 10:53:30.461064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31824 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.461083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.461096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.461105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.402 [2024-07-25 10:53:30.461114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31832 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.461126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.461137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.461146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.402 [2024-07-25 
10:53:30.461156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31840 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.461167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.461179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:07.402 [2024-07-25 10:53:30.461188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:07.402 [2024-07-25 10:53:30.461198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31848 len:8 PRP1 0x0 PRP2 0x0 00:15:07.402 [2024-07-25 10:53:30.461211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.402 [2024-07-25 10:53:30.461284] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2308a10 was disconnected and freed. reset controller. 00:15:07.402 [2024-07-25 10:53:30.461303] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:07.402 [2024-07-25 10:53:30.461318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:07.402 [2024-07-25 10:53:30.464897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:07.402 [2024-07-25 10:53:30.464942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2299570 (9): Bad file descriptor 00:15:07.402 [2024-07-25 10:53:30.497818] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:07.402 00:15:07.402 Latency(us) 00:15:07.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.402 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:07.402 Verification LBA range: start 0x0 length 0x4000 00:15:07.402 NVMe0n1 : 15.01 8926.04 34.87 237.49 0.00 13935.20 677.70 16681.89 00:15:07.402 =================================================================================================================== 00:15:07.402 Total : 8926.04 34.87 237.49 0.00 13935.20 677.70 16681.89 00:15:07.402 Received shutdown signal, test time was about 15.000000 seconds 00:15:07.402 00:15:07.402 Latency(us) 00:15:07.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.402 =================================================================================================================== 00:15:07.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75422 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75422 /var/tmp/bdevperf.sock 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75422 ']' 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
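For reference, the pass/fail gate recorded in the trace above (host/failover.sh@65-@67) reduces to counting the successful resets that bdevperf logged across the forced failovers. A minimal sketch of that check, assuming the bdevperf output was captured to try.txt as in this run:

  # Each completed failover ends with a "Resetting controller successful." line in the bdevperf log.
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  # The script expects exactly three successful resets for this stage of the test.
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi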
00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.402 10:53:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:07.970 10:53:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.970 10:53:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:07.970 10:53:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:08.228 [2024-07-25 10:53:37.815439] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:08.228 10:53:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:08.487 [2024-07-25 10:53:38.079884] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:08.487 10:53:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:08.746 NVMe0n1 00:15:08.747 10:53:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:09.005 00:15:09.005 10:53:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:09.637 00:15:09.637 10:53:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:09.637 10:53:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:09.637 10:53:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:09.904 10:53:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:13.188 10:53:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:13.188 10:53:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:13.188 10:53:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75499 00:15:13.188 10:53:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:13.188 10:53:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75499 00:15:14.569 0 00:15:14.569 10:53:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:14.569 [2024-07-25 10:53:36.560611] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:14.569 [2024-07-25 10:53:36.561449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75422 ] 00:15:14.569 [2024-07-25 10:53:36.694337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.569 [2024-07-25 10:53:36.842308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.569 [2024-07-25 10:53:36.916417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:14.569 [2024-07-25 10:53:39.521748] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:14.569 [2024-07-25 10:53:39.521897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.569 [2024-07-25 10:53:39.521923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.569 [2024-07-25 10:53:39.521942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.569 [2024-07-25 10:53:39.521955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.569 [2024-07-25 10:53:39.521969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.569 [2024-07-25 10:53:39.521981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.569 [2024-07-25 10:53:39.521994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.569 [2024-07-25 10:53:39.522034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.569 [2024-07-25 10:53:39.522049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:14.569 [2024-07-25 10:53:39.522106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:14.569 [2024-07-25 10:53:39.522138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x770570 (9): Bad file descriptor 00:15:14.569 [2024-07-25 10:53:39.532732] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:14.569 Running I/O for 1 seconds... 
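Stepping back from the captured bdevperf log: the controller set-up traced a little earlier (host/failover.sh@76-@88) boils down to the RPC sequence below. This is a sketch using the same addresses, NQN and socket paths as this run, not the literal script:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1
  # Expose two extra portals on the target (default RPC socket).
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
  # Attach the same controller name to all three portals; the extra paths become failover trids.
  for port in 4420 4421 4422; do
      $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
  done
  # Drop the active path, give the reset time to finish, and confirm NVMe0 survived on another path.
  $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  sleep 3
  $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0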
00:15:14.569 00:15:14.569 Latency(us) 00:15:14.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.569 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:14.569 Verification LBA range: start 0x0 length 0x4000 00:15:14.569 NVMe0n1 : 1.01 8575.94 33.50 0.00 0.00 14835.83 1422.43 17873.45 00:15:14.569 =================================================================================================================== 00:15:14.569 Total : 8575.94 33.50 0.00 0.00 14835.83 1422.43 17873.45 00:15:14.569 10:53:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:14.569 10:53:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:14.569 10:53:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:14.828 10:53:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:14.828 10:53:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:15.086 10:53:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:15.357 10:53:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:18.702 10:53:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:18.702 10:53:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75422 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75422 ']' 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75422 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75422 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:18.702 killing process with pid 75422 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75422' 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75422 00:15:18.702 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75422 00:15:18.961 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:18.961 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.220 10:53:48 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:19.220 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:19.220 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:19.220 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.220 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:19.220 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.220 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:19.220 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.220 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.220 rmmod nvme_tcp 00:15:19.220 rmmod nvme_fabrics 00:15:19.220 rmmod nvme_keyring 00:15:19.479 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.479 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:19.479 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:19.479 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75152 ']' 00:15:19.479 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75152 00:15:19.479 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75152 ']' 00:15:19.479 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75152 00:15:19.479 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:19.479 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.479 10:53:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75152 00:15:19.479 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:19.479 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:19.479 killing process with pid 75152 00:15:19.479 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75152' 00:15:19.479 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75152 00:15:19.479 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75152 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.738 
10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:19.738 00:15:19.738 real 0m33.277s 00:15:19.738 user 2m8.854s 00:15:19.738 sys 0m5.711s 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:19.738 ************************************ 00:15:19.738 END TEST nvmf_failover 00:15:19.738 ************************************ 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.738 ************************************ 00:15:19.738 START TEST nvmf_host_discovery 00:15:19.738 ************************************ 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:19.738 * Looking for test storage... 00:15:19.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:19.738 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
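The nvmf_veth_init steps that follow wire the target into its own network namespace and bridge it back to the initiator. Condensed into one runnable sketch (namespace, interface names and addresses exactly as in this log; the second target interface nvmf_tgt_if2/10.0.0.3 and the iptables ACCEPT rule are set up the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  # One veth pair per side: the *_if end carries the address, the *_br end plugs into the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The bridge is what puts both veth ends on one L2 segment, which is why the plain ping checks below are enough to prove the plumbing before any NVMe/TCP traffic is attempted.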
00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:19.739 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:19.997 Cannot find device "nvmf_tgt_br" 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.997 Cannot find device "nvmf_tgt_br2" 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:19.997 Cannot find device "nvmf_tgt_br" 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:19.997 Cannot find device "nvmf_tgt_br2" 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:19.997 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:20.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:15:20.256 00:15:20.256 --- 10.0.0.2 ping statistics --- 00:15:20.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.256 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:20.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:20.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:20.256 00:15:20.256 --- 10.0.0.3 ping statistics --- 00:15:20.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.256 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:20.256 00:15:20.256 --- 10.0.0.1 ping statistics --- 00:15:20.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.256 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75770 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75770 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75770 ']' 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
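What nvmfappstart does here, reduced to its essentials: launch the target inside the namespace and block until its RPC socket answers. A minimal stand-in for waitforlisten, with the binary path, core mask and socket taken from this log (the real helper is more thorough):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Poll the default RPC socket until the app is ready to take RPCs; bail out if it died.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done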
00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.256 10:53:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:20.256 [2024-07-25 10:53:49.904513] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:20.256 [2024-07-25 10:53:49.904613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.516 [2024-07-25 10:53:50.040035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.516 [2024-07-25 10:53:50.158722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.516 [2024-07-25 10:53:50.158785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.516 [2024-07-25 10:53:50.158797] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.516 [2024-07-25 10:53:50.158807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.516 [2024-07-25 10:53:50.158815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.516 [2024-07-25 10:53:50.158868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.516 [2024-07-25 10:53:50.216578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 [2024-07-25 10:53:50.902172] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 [2024-07-25 10:53:50.910281] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 null0 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 null1 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75802 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75802 /tmp/host.sock 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75802 ']' 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.451 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.451 10:53:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 [2024-07-25 10:53:51.009670] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
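Reproduced outside the harness, the target-side setup traced in discovery.sh@32-37 and the second app instance started at @44-46 would look roughly like the sketch below. rpc_cmd is replaced by a direct rpc.py invocation and the paths are assumed from the log, so treat this as illustrative rather than the suite's exact helpers.

    rpc="$SPDK_ROOT/scripts/rpc.py"

    # Target side (default socket /var/tmp/spdk.sock): TCP transport with the options
    # captured in the trace, a discovery listener on port 8009, and two null bdevs.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $rpc bdev_null_create null0 1000 512    # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_null_create null1 1000 512
    $rpc bdev_wait_for_examine

    # Host side: a second nvmf_tgt instance pinned to core 0 with its own RPC socket,
    # which the later "rpc_cmd -s /tmp/host.sock" discovery calls are issued against.
    "$SPDK_ROOT/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
    hostpid=$!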
00:15:21.451 [2024-07-25 10:53:51.009802] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75802 ] 00:15:21.451 [2024-07-25 10:53:51.153049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.710 [2024-07-25 10:53:51.303107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.710 [2024-07-25 10:53:51.380315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:22.646 10:53:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:22.646 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.647 10:53:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.647 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.907 [2024-07-25 10:53:52.422804] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.907 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:22.908 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:22.909 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:22.909 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:22.909 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.909 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.909 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:22.909 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:22.909 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:22.909 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.168 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:15:23.168 10:53:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:15:23.426 [2024-07-25 10:53:53.055834] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:23.426 [2024-07-25 10:53:53.055882] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:23.426 [2024-07-25 10:53:53.055910] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:23.426 [2024-07-25 10:53:53.061902] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:23.426 [2024-07-25 10:53:53.119120] 
bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:23.426 [2024-07-25 10:53:53.119149] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:23.992 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:23.992 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:23.992 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:23.992 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:23.992 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:23.992 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.992 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.992 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:23.992 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:23.992 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:24.251 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.252 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.511 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:24.511 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:24.511 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:24.511 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.511 10:53:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.511 [2024-07-25 10:53:54.008105] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:24.511 [2024-07-25 10:53:54.008456] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:24.511 [2024-07-25 10:53:54.008487] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 
-- # get_subsystem_names 00:15:24.511 [2024-07-25 10:53:54.014470] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:24.511 [2024-07-25 10:53:54.079533] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:24.511 [2024-07-25 10:53:54.079571] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:24.511 [2024-07-25 10:53:54.079591] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
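The path checks that follow (discovery.sh@63, used by the @107/@122/@131 conditions) boil down to querying the host app's bdev_nvme layer and flattening the listening ports. An illustrative standalone version of that helper, assuming the same /tmp/host.sock socket and the jq filter shown in the trace:

    get_paths() {
        # Print the trsvcid of every path attached to the named controller on the host app.
        "$SPDK_ROOT/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # Once the discovery poller has attached the 4421 listener added above,
    # "get_paths nvme0" is expected to print "4420 4421".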
00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.511 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.770 [2024-07-25 10:53:54.248780] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:24.770 [2024-07-25 10:53:54.248844] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:24.770 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.770 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:24.770 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:24.770 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:24.771 [2024-07-25 10:53:54.254773] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:24.771 [2024-07-25 10:53:54.254805] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:24.771 [2024-07-25 10:53:54.254949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.771 [2024-07-25 10:53:54.254989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.771 [2024-07-25 10:53:54.255003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.771 [2024-07-25 10:53:54.255013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.771 [2024-07-25 10:53:54.255024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.771 [2024-07-25 10:53:54.255033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.771 [2024-07-25 10:53:54.255044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.771 [2024-07-25 10:53:54.255053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.771 [2024-07-25 10:53:54.255063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1536620 is same with the state(5) to be set 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:24.771 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:25.030 10:53:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.030 10:53:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.966 [2024-07-25 10:53:55.676345] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:25.966 [2024-07-25 10:53:55.676375] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:25.966 [2024-07-25 10:53:55.676410] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:25.966 [2024-07-25 10:53:55.682403] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:26.225 [2024-07-25 10:53:55.743144] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:26.225 [2024-07-25 10:53:55.743194] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:15:26.225 request: 00:15:26.225 { 00:15:26.225 "name": "nvme", 00:15:26.225 "trtype": "tcp", 00:15:26.225 "traddr": "10.0.0.2", 00:15:26.225 "adrfam": "ipv4", 00:15:26.225 "trsvcid": "8009", 00:15:26.225 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:26.225 "wait_for_attach": true, 00:15:26.225 "method": "bdev_nvme_start_discovery", 00:15:26.225 "req_id": 1 00:15:26.225 } 00:15:26.225 Got JSON-RPC error response 00:15:26.225 response: 00:15:26.225 { 00:15:26.225 "code": -17, 00:15:26.225 "message": "File exists" 00:15:26.225 } 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.225 request: 00:15:26.225 { 00:15:26.225 "name": "nvme_second", 00:15:26.225 "trtype": "tcp", 00:15:26.225 "traddr": "10.0.0.2", 00:15:26.225 "adrfam": "ipv4", 00:15:26.225 "trsvcid": "8009", 00:15:26.225 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:26.225 "wait_for_attach": true, 00:15:26.225 "method": "bdev_nvme_start_discovery", 00:15:26.225 "req_id": 1 00:15:26.225 } 00:15:26.225 Got JSON-RPC error response 00:15:26.225 response: 00:15:26.225 { 00:15:26.225 "code": -17, 00:15:26.225 "message": "File exists" 00:15:26.225 } 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:26.225 10:53:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:26.225 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.482 10:53:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.482 10:53:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.417 [2024-07-25 10:53:57.015828] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:27.417 [2024-07-25 10:53:57.015936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1572c70 with addr=10.0.0.2, port=8010 00:15:27.417 [2024-07-25 10:53:57.015962] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:27.417 [2024-07-25 10:53:57.015973] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:27.417 [2024-07-25 10:53:57.015982] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:28.352 [2024-07-25 10:53:58.015847] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:28.352 [2024-07-25 10:53:58.015965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1572c70 with addr=10.0.0.2, port=8010 00:15:28.352 [2024-07-25 10:53:58.016006] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:28.352 [2024-07-25 10:53:58.016018] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:28.352 [2024-07-25 10:53:58.016027] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:29.289 [2024-07-25 10:53:59.015658] 
bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:29.289 request: 00:15:29.289 { 00:15:29.289 "name": "nvme_second", 00:15:29.289 "trtype": "tcp", 00:15:29.289 "traddr": "10.0.0.2", 00:15:29.289 "adrfam": "ipv4", 00:15:29.289 "trsvcid": "8010", 00:15:29.289 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:29.289 "wait_for_attach": false, 00:15:29.289 "attach_timeout_ms": 3000, 00:15:29.289 "method": "bdev_nvme_start_discovery", 00:15:29.289 "req_id": 1 00:15:29.289 } 00:15:29.289 Got JSON-RPC error response 00:15:29.289 response: 00:15:29.289 { 00:15:29.289 "code": -110, 00:15:29.289 "message": "Connection timed out" 00:15:29.289 } 00:15:29.289 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:29.289 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:29.289 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.289 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.289 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.289 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:29.289 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:29.289 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:29.289 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75802 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.549 rmmod nvme_tcp 00:15:29.549 rmmod nvme_fabrics 00:15:29.549 rmmod nvme_keyring 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:29.549 10:53:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75770 ']' 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75770 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 75770 ']' 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 75770 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75770 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:29.549 killing process with pid 75770 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75770' 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 75770 00:15:29.549 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 75770 00:15:29.808 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.808 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.808 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.808 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.808 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.808 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.808 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.808 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.808 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:29.809 00:15:29.809 real 0m10.176s 00:15:29.809 user 0m19.538s 00:15:29.809 sys 0m2.095s 00:15:29.809 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.809 ************************************ 00:15:29.809 END TEST nvmf_host_discovery 00:15:29.809 ************************************ 00:15:29.809 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.069 ************************************ 00:15:30.069 
START TEST nvmf_host_multipath_status 00:15:30.069 ************************************ 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:30.069 * Looking for test storage... 00:15:30.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:30.069 Cannot find device "nvmf_tgt_br" 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:30.069 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.070 Cannot find device "nvmf_tgt_br2" 00:15:30.070 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:30.070 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:30.070 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:30.070 Cannot find device "nvmf_tgt_br" 00:15:30.070 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:30.070 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:30.070 Cannot find device "nvmf_tgt_br2" 00:15:30.070 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:30.070 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:30.070 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.329 10:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.329 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.329 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.329 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.329 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:30.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:30.329 00:15:30.329 --- 10.0.0.2 ping statistics --- 00:15:30.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.329 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:30.329 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:30.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:30.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:15:30.329 00:15:30.329 --- 10.0.0.3 ping statistics --- 00:15:30.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.329 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:15:30.329 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:15:30.330 00:15:30.330 --- 10.0.0.1 ping statistics --- 00:15:30.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.330 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:30.330 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:30.589 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76258 00:15:30.589 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76258 00:15:30.589 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:30.589 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76258 ']' 00:15:30.589 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.589 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.589 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:30.589 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.589 10:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:30.589 [2024-07-25 10:54:00.128915] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:30.589 [2024-07-25 10:54:00.129034] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.589 [2024-07-25 10:54:00.269902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:30.899 [2024-07-25 10:54:00.409265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.899 [2024-07-25 10:54:00.409331] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.899 [2024-07-25 10:54:00.409344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.899 [2024-07-25 10:54:00.409355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.899 [2024-07-25 10:54:00.409364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.899 [2024-07-25 10:54:00.409557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.899 [2024-07-25 10:54:00.409710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.899 [2024-07-25 10:54:00.487729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:31.467 10:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.467 10:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:15:31.467 10:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.467 10:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:31.467 10:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:31.467 10:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.467 10:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76258 00:15:31.467 10:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:31.726 [2024-07-25 10:54:01.413460] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.726 10:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:32.293 Malloc0 00:15:32.293 10:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:32.293 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.552 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.811 [2024-07-25 10:54:02.431400] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.811 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:33.070 [2024-07-25 10:54:02.735784] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:33.070 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76308 00:15:33.070 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:33.070 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:33.070 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76308 /var/tmp/bdevperf.sock 00:15:33.070 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76308 ']' 00:15:33.070 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.070 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:33.070 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:33.070 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.070 10:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:34.006 10:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.006 10:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:15:34.006 10:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:34.264 10:54:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:34.522 Nvme0n1 00:15:34.522 10:54:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:35.089 Nvme0n1 00:15:35.089 10:54:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:35.089 10:54:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:36.993 10:54:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:36.993 10:54:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:37.253 10:54:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:37.510 10:54:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:38.445 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:38.445 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:38.445 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:38.445 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:38.704 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:38.704 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:38.704 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:38.704 10:54:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:38.962 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:38.962 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:38.962 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:38.962 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:39.220 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.220 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:39.220 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.220 10:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:39.478 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.478 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:39.478 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.478 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:39.751 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.751 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:39.751 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:39.751 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.010 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:40.010 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:40.010 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:40.268 10:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 
00:15:40.526 10:54:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:41.462 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:41.462 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:41.462 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.462 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:41.720 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:41.720 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:41.720 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.720 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:41.978 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.978 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:41.978 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.978 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:42.237 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.237 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:42.237 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.237 10:54:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:42.495 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.496 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:42.496 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.496 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:42.754 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.754 10:54:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:42.754 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:42.754 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:43.013 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:43.013 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:43.013 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:43.287 10:54:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:43.571 10:54:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:44.504 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:44.504 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:44.504 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.504 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:44.763 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.763 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:44.763 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:44.763 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.022 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:45.022 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:45.022 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.022 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:45.281 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.281 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:45.281 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:45.281 10:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.539 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.539 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:45.539 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.539 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:45.798 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.798 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:45.798 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.798 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:46.057 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:46.057 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:46.057 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:46.316 10:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:46.575 10:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:47.510 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:47.510 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:47.510 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.510 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:48.078 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.078 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:15:48.078 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.078 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:48.078 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:48.078 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:48.078 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.078 10:54:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:48.338 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.338 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:48.338 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.338 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:48.597 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.597 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:48.597 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.597 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:48.856 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.856 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:48.856 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.856 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:49.115 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:49.115 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:15:49.115 10:54:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:49.435 10:54:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:49.702 10:54:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:50.655 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:50.655 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:50.655 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.655 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:50.914 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:50.914 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:50.914 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.914 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:51.172 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:51.172 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:51.172 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:51.430 10:54:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.690 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.690 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:51.690 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.690 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:51.949 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.949 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:51.949 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.949 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:15:52.208 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:52.208 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:52.208 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.208 10:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:52.467 10:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:52.467 10:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:52.467 10:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:52.725 10:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:52.984 10:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:53.930 10:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:53.930 10:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:53.930 10:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.930 10:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:54.190 10:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:54.190 10:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:54.190 10:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:54.190 10:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.449 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.449 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:54.449 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.449 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:15:54.708 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.708 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:54.708 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:54.708 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.967 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.967 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:54.967 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.967 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:55.225 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:55.225 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:55.225 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.225 10:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:55.483 10:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.483 10:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:55.742 10:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:55.742 10:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:56.000 10:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:56.258 10:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:57.635 10:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:57.635 10:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:57.635 10:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
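Every port_status line in the trace above is the same three-step check: query the I/O paths over the bdevperf RPC socket, pick out the path with the given trsvcid via jq, and compare one field (current/connected/accessible) against the expected value. The following is a minimal sketch of that pattern, with the socket path, RPC name and jq field names taken from the trace itself; treat it as an illustration of the flow, not the test's exact source.

  # Sketch of the port_status/check_status pattern exercised above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  port_status() {            # port_status <trsvcid> <field> <expected>
      local port=$1 field=$2 expected=$3
      local actual
      actual=$("$rpc" -s "$sock" bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }

  # e.g. after set_ANA_state optimized optimized with an active_active policy,
  # both listeners are expected to be current, connected and accessible:
  for field in current connected accessible; do
      port_status 4420 "$field" true
      port_status 4421 "$field" true
  done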
00:15:57.635 10:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:57.635 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.635 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:57.635 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.635 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:57.893 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.893 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:57.893 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:57.893 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.153 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.153 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:58.153 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:58.153 10:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.412 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.412 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:58.412 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.412 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:58.671 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.671 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:58.672 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.672 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:58.931 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.931 
10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:58.931 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:59.189 10:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:59.447 10:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:00.824 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:00.824 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:00.824 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.825 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:00.825 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:00.825 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:00.825 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:00.825 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.083 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.083 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:01.083 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.083 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:01.342 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.342 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:01.342 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.342 10:54:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:01.600 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.600 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:01.601 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.601 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:01.601 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.601 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:01.601 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:01.601 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.168 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.168 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:02.168 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:02.168 10:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:02.427 10:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:03.804 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:03.804 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:03.804 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.804 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:03.804 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:03.804 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:03.804 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.804 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:04.063 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.063 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:04.063 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.063 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:04.321 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.321 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:04.321 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.321 10:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:04.580 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.580 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:04.580 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.580 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:04.839 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.839 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:04.839 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:04.839 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.098 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.098 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:05.098 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:05.357 10:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:05.616 10:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:06.552 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:06.552 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:06.552 10:54:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.552 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:06.812 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.812 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:06.812 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.812 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:07.089 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:07.089 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:07.089 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.089 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:07.347 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.347 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:07.347 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.347 10:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:07.606 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.606 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:07.606 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.606 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:07.866 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.866 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:07.866 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.866 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76308 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76308 ']' 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76308 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76308 00:16:08.127 killing process with pid 76308 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76308' 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76308 00:16:08.127 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76308 00:16:08.127 Connection closed with partial response: 00:16:08.127 00:16:08.127 00:16:08.393 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76308 00:16:08.393 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:08.393 [2024-07-25 10:54:02.805563] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:08.393 [2024-07-25 10:54:02.805691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76308 ] 00:16:08.393 [2024-07-25 10:54:02.943970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.393 [2024-07-25 10:54:03.096036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.393 [2024-07-25 10:54:03.176293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:08.393 Running I/O for 90 seconds... 
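The killprocess calls above stop the bdevperf instance started earlier in this run (pid 76308, visible in the spdk_pid76308 file prefix of the EAL parameters), and the "Connection closed with partial response" lines appear to come from bdevperf being stopped mid-run before the test dumps its captured output from try.txt. A rough sketch of that teardown pattern, with the sudo special case from autotest_common.sh simplified away, looks like this:

  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if it already exited
      ps --no-headers -o comm= "$pid"             # record which process is being stopped
      echo "killing process with pid $pid"
      kill "$pid"
  }

  killprocess 76308
  # wait only works because bdevperf was launched as a background job of the
  # same shell; it collects the (partial) exit status after the kill.
  wait 76308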
00:16:08.393 [2024-07-25 10:54:19.037746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.393 [2024-07-25 10:54:19.037832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.037928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.393 [2024-07-25 10:54:19.037952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.037976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.393 [2024-07-25 10:54:19.037992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.393 [2024-07-25 10:54:19.038049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.393 [2024-07-25 10:54:19.038087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.393 [2024-07-25 10:54:19.038122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.393 [2024-07-25 10:54:19.038159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.393 [2024-07-25 10:54:19.038194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:08.393 [2024-07-25 10:54:19.038746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.393 [2024-07-25 10:54:19.038760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.038781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.038806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.038830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.038845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:08.394 [2024-07-25 10:54:19.039262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.039373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.039410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.039458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.039499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.039536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.039574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.039612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:18400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.039650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.039962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.039978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.040016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.040053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.040090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.040127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.040164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.040202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.040239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.394 [2024-07-25 10:54:19.040276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.040313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.040349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:08.394 [2024-07-25 10:54:19.040371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.040386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
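Each pair of NOTICE lines above is an I/O command followed by its completion; status (03/02) is the NVMe path-related status "Asymmetric Access Inaccessible", which is what bdevperf observes on a path whose ANA state the test has set to inaccessible during the 90-second run. If the captured output needs to be summarized after the fact, something like the following works against the try.txt file the test cats above (adjust the path when replaying elsewhere):

  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$log"               # completions that failed on the inaccessible path
  grep 'nvme_io_qpair_print_command' "$log" | grep -c 'WRITE'   # how many of the printed commands were writes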
00:16:08.394 [2024-07-25 10:54:19.040415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.394 [2024-07-25 10:54:19.040431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.040468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.040512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.040548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.040585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.395 [2024-07-25 10:54:19.040624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.395 [2024-07-25 10:54:19.040661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.395 [2024-07-25 10:54:19.040698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.395 [2024-07-25 10:54:19.040734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.395 [2024-07-25 10:54:19.040771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.395 [2024-07-25 10:54:19.040808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.395 [2024-07-25 10:54:19.040845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.395 [2024-07-25 10:54:19.040913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.040975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.040999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:08.395 [2024-07-25 10:54:19.041579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.395 [2024-07-25 10:54:19.041804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:08.395 [2024-07-25 10:54:19.041825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.041840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.041898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.041924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.041947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.041962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.041983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:18552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.041998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.042456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.042471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:19.043243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:16:08.396 [2024-07-25 10:54:19.043564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:19.043825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:19.043840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:35.103165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.396 [2024-07-25 10:54:35.103242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:35.103304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:35.103323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:35.103347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:35.103361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:35.103381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:35.103395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:08.396 [2024-07-25 10:54:35.103414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.396 [2024-07-25 10:54:35.103427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.103459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.103490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.103530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.103590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.103628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.103659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.103691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.103723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.103754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.103786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.103818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.103864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.103908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.103939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.103970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.103988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.104045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.104076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:08.397 [2024-07-25 10:54:35.104109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.104140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.104171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.104202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.104233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:76 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.104467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.104499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.397 [2024-07-25 10:54:35.104530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:08.397 [2024-07-25 10:54:35.104642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.397 [2024-07-25 10:54:35.104655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.104673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.398 [2024-07-25 10:54:35.104686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.104704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.104717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 
10:54:35.104735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.104748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.104767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.104779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.104806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.104820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.104932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.104953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.104983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.398 [2024-07-25 10:54:35.105196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.398 [2024-07-25 10:54:35.105231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.398 [2024-07-25 10:54:35.105295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.398 [2024-07-25 10:54:35.105602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.398 [2024-07-25 10:54:35.105635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.398 [2024-07-25 10:54:35.105678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:08.398 [2024-07-25 10:54:35.105697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.399 [2024-07-25 10:54:35.105709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:08.399 [2024-07-25 10:54:35.106785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.399 [2024-07-25 10:54:35.106812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:08.399 [2024-07-25 10:54:35.106837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.399 [2024-07-25 10:54:35.106856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:08.399 [2024-07-25 10:54:35.106875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.399 [2024-07-25 10:54:35.106889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:08.399 [2024-07-25 10:54:35.106907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.399 [2024-07-25 10:54:35.106946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:08.399 [2024-07-25 10:54:35.106969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.399 [2024-07-25 10:54:35.106983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:08.399 [2024-07-25 10:54:35.107002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.399 [2024-07-25 10:54:35.107015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:08.399 [2024-07-25 10:54:35.107041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:16:08.399 [2024-07-25 10:54:35.107054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:16:08.399 [2024-07-25 10:54:35.107083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:08.399 [2024-07-25 10:54:35.107097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:16:08.399 [2024-07-25 10:54:35.107116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:08.399 [2024-07-25 10:54:35.107129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:16:08.399 [2024-07-25 10:54:35.107148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:08.399 [2024-07-25 10:54:35.107161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:16:08.399 [2024-07-25 10:54:35.107196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:08.399 [2024-07-25 10:54:35.107214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:16:08.399 [2024-07-25 10:54:35.107234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:08.399 [2024-07-25 10:54:35.107248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:16:08.399 Received shutdown signal, test time was about 33.041325 seconds
00:16:08.399 
00:16:08.399 Latency(us)
00:16:08.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:08.399 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:08.399 Verification LBA range: start 0x0 length 0x4000
00:16:08.399 Nvme0n1 : 33.04 9026.97 35.26 0.00 0.00 14150.39 181.53 4026531.84
00:16:08.399 ===================================================================================================================
00:16:08.399 Total : 9026.97 35.26 0.00 0.00 14150.39 181.53 4026531.84
00:16:08.399 10:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp
== tcp ']' 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:08.659 rmmod nvme_tcp 00:16:08.659 rmmod nvme_fabrics 00:16:08.659 rmmod nvme_keyring 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76258 ']' 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76258 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76258 ']' 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76258 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76258 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:08.659 killing process with pid 76258 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76258' 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76258 00:16:08.659 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76258 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:09.228 00:16:09.228 real 0m39.174s 00:16:09.228 user 2m5.544s 00:16:09.228 sys 0m11.950s 00:16:09.228 10:54:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:09.228 ************************************ 00:16:09.228 END TEST nvmf_host_multipath_status 00:16:09.228 ************************************ 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.228 ************************************ 00:16:09.228 START TEST nvmf_discovery_remove_ifc 00:16:09.228 ************************************ 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:09.228 * Looking for test storage... 00:16:09.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
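The trace above shows run_test launching test/nvmf/host/discovery_remove_ifc.sh over TCP, with nvmf/common.sh filling in the test defaults (ports 4420/4421/4422, the 192.168.100 address prefix, a generated host NQN, and nqn.2016-06.io.spdk:testnqn). A minimal sketch of reproducing that invocation by hand follows; SPDK_DIR and the standalone call are illustrative assumptions rather than part of the CI harness, and the values shown are simply those visible in the trace.

#!/usr/bin/env bash
# Hand-run sketch of the sub-test launched above (illustrative, not harness code).
# SPDK_DIR is an assumed convenience variable; its default mirrors the checkout path in the trace.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# Defaults the traced nvmf/common.sh sets for this test (for reference only; the script
# sources nvmf/common.sh itself and re-derives them):
#   NVMF_PORT=4420  NVMF_SECOND_PORT=4421  NVMF_THIRD_PORT=4422
#   NVMF_IP_PREFIX=192.168.100  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
#   NVME_HOSTNQN=$(nvme gen-hostnqn)

# Launch the discovery/interface-removal host test over TCP, as run_test does above:
"$SPDK_DIR/test/nvmf/host/discovery_remove_ifc.sh" --transport=tcp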
00:16:09.228 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:09.229 10:54:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:09.229 Cannot find device "nvmf_tgt_br" 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:09.229 Cannot find device "nvmf_tgt_br2" 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:09.229 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:09.488 Cannot find device "nvmf_tgt_br" 00:16:09.488 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:09.488 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:09.488 Cannot find device "nvmf_tgt_br2" 00:16:09.488 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:09.488 10:54:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:09.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:16:09.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:09.488 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:09.489 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:09.747 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:16:09.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:16:09.747 00:16:09.747 --- 10.0.0.2 ping statistics --- 00:16:09.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.747 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:09.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:09.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:16:09.747 00:16:09.747 --- 10.0.0.3 ping statistics --- 00:16:09.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.747 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:09.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:09.747 00:16:09.747 --- 10.0.0.1 ping statistics --- 00:16:09.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.747 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77094 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77094 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77094 ']' 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 
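At this point nvmf_veth_init has finished building the virtual topology used for the rest of the test: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) joining the host-side ends, 10.0.0.1 on the initiator, 10.0.0.2/10.0.0.3 on the target, and iptables rules opening port 4420, all verified with the pings above. A condensed sketch of the same commands (single target interface shown; run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator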
00:16:09.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.747 10:54:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:09.747 [2024-07-25 10:54:39.355454] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:09.747 [2024-07-25 10:54:39.355582] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.006 [2024-07-25 10:54:39.498145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.006 [2024-07-25 10:54:39.627825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.006 [2024-07-25 10:54:39.627894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.006 [2024-07-25 10:54:39.627909] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.006 [2024-07-25 10:54:39.627919] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.006 [2024-07-25 10:54:39.627929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.006 [2024-07-25 10:54:39.627971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.006 [2024-07-25 10:54:39.685494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:10.940 [2024-07-25 10:54:40.441556] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.940 [2024-07-25 10:54:40.449696] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:10.940 null0 00:16:10.940 [2024-07-25 10:54:40.481556] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77126 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77126 /tmp/host.sock 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77126 ']' 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:16:10.940 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:10.940 10:54:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:10.940 [2024-07-25 10:54:40.563264] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:10.940 [2024-07-25 10:54:40.563371] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77126 ] 00:16:11.198 [2024-07-25 10:54:40.706298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.198 [2024-07-25 10:54:40.873436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.135 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.135 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:12.135 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:12.135 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:12.135 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.135 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:12.135 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.135 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:12.135 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.135 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:12.135 [2024-07-25 10:54:41.626493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:12.135 10:54:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.136 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:12.136 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.136 10:54:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:13.078 [2024-07-25 10:54:42.684881] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:13.078 [2024-07-25 10:54:42.684934] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:13.078 [2024-07-25 10:54:42.684953] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:13.078 [2024-07-25 10:54:42.690956] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:13.078 [2024-07-25 10:54:42.749549] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:13.078 [2024-07-25 10:54:42.749644] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:13.078 [2024-07-25 10:54:42.749674] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:13.078 [2024-07-25 10:54:42.749691] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:13.078 [2024-07-25 10:54:42.749719] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:13.078 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.078 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:13.078 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:13.078 [2024-07-25 10:54:42.753508] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b27ef0 was disconnected and freed. delete nvme_qpair. 
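The trace shows the host-side app (pid 77126) being driven over /tmp/host.sock: bdev_nvme options are applied before framework init, discovery is started against the target's discovery service at 10.0.0.2:8009, and the attached namespace surfaces as bdev nvme0n1. A sketch of the same sequence using the harness's rpc_cmd wrapper (assumed here to forward its arguments unchanged to SPDK's RPC client):

  # Host app started earlier in the trace:
  #   nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1         # options exactly as passed in the trace, before framework init
  rpc_cmd -s /tmp/host.sock framework_start_init
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'  # expect nvme0n1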
00:16:13.078 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:13.078 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.078 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:13.078 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:13.078 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:13.079 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:13.079 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.079 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:13.079 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:13.337 10:54:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:14.268 10:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:14.268 10:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:14.268 10:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:14.268 10:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.268 10:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:14.268 10:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:14.268 10:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:14.268 10:54:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.268 10:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:14.268 10:54:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:15.642 10:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:15.642 10:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.642 10:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.642 10:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:15.642 10:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:15.642 10:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:15.642 10:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:15.642 10:54:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.642 10:54:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:15.642 10:54:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:16.576 10:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:16.576 10:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:16.576 10:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.576 10:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.576 10:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:16.576 10:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:16.576 10:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:16.576 10:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.576 10:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:16.576 10:54:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:17.510 10:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:17.510 10:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.510 10:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.510 10:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:17.510 10:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:17.510 10:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:17.510 10:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:17.510 10:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.510 10:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:17.510 10:54:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:18.445 10:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:18.445 10:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.445 10:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.445 10:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:18.445 10:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:18.445 10:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:18.445 10:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:18.445 10:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.445 [2024-07-25 10:54:48.177074] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:18.445 [2024-07-25 10:54:48.177143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.445 [2024-07-25 10:54:48.177159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.445 [2024-07-25 10:54:48.177172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.445 [2024-07-25 10:54:48.177181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.445 [2024-07-25 10:54:48.177190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.445 [2024-07-25 10:54:48.177198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.445 [2024-07-25 10:54:48.177208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.445 [2024-07-25 10:54:48.177216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.445 [2024-07-25 10:54:48.177225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.445 [2024-07-25 10:54:48.177234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.445 [2024-07-25 10:54:48.177242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8dac0 is same with the state(5) to be set 00:16:18.703 [2024-07-25 10:54:48.187070] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8dac0 (9): Bad file descriptor 00:16:18.703 [2024-07-25 10:54:48.197110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:18.703 10:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:18.703 10:54:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:19.637 10:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:19.637 10:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.637 10:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:19.637 10:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.637 10:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.637 10:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:19.637 10:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:19.637 [2024-07-25 10:54:49.254978] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:19.637 [2024-07-25 10:54:49.255114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a8dac0 with addr=10.0.0.2, port=4420 00:16:19.637 [2024-07-25 10:54:49.255146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8dac0 is same with the state(5) to be set 00:16:19.637 [2024-07-25 10:54:49.255209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8dac0 (9): Bad file descriptor 00:16:19.637 [2024-07-25 10:54:49.256015] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:19.637 [2024-07-25 10:54:49.256095] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:19.637 [2024-07-25 10:54:49.256114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:19.637 [2024-07-25 10:54:49.256133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:19.637 [2024-07-25 10:54:49.256187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:19.637 [2024-07-25 10:54:49.256207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:19.637 10:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.637 10:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:19.637 10:54:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:20.571 [2024-07-25 10:54:50.256274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:16:20.571 [2024-07-25 10:54:50.256331] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:20.571 [2024-07-25 10:54:50.256360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:20.571 [2024-07-25 10:54:50.256371] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:20.571 [2024-07-25 10:54:50.256398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:20.571 [2024-07-25 10:54:50.256433] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:20.571 [2024-07-25 10:54:50.256497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.571 [2024-07-25 10:54:50.256515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.571 [2024-07-25 10:54:50.256529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.571 [2024-07-25 10:54:50.256539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.571 [2024-07-25 10:54:50.256549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.571 [2024-07-25 10:54:50.256557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.571 [2024-07-25 10:54:50.256567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.571 [2024-07-25 10:54:50.256576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.571 [2024-07-25 10:54:50.256586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.571 [2024-07-25 10:54:50.256595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.571 [2024-07-25 10:54:50.256604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:20.571 [2024-07-25 10:54:50.256624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a91860 (9): Bad file descriptor 00:16:20.571 [2024-07-25 10:54:50.257417] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:20.571 [2024-07-25 10:54:50.257441] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:20.571 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:20.571 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.571 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:20.571 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.571 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:20.571 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.571 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:20.571 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.828 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:20.828 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:20.828 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:20.829 10:54:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:21.767 10:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:21.767 10:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.767 10:54:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:21.767 10:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.767 10:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:21.767 10:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:21.767 10:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:21.767 10:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.767 10:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:21.767 10:54:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:22.702 [2024-07-25 10:54:52.269341] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:22.702 [2024-07-25 10:54:52.269390] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:22.702 [2024-07-25 10:54:52.269409] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:22.702 [2024-07-25 10:54:52.275376] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:22.702 [2024-07-25 10:54:52.332248] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:22.702 [2024-07-25 10:54:52.332350] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:22.702 [2024-07-25 10:54:52.332378] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:22.702 [2024-07-25 10:54:52.332395] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:22.702 [2024-07-25 10:54:52.332403] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:22.702 [2024-07-25 10:54:52.338107] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b05460 was disconnected and freed. delete nvme_qpair. 
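Throughout the sequence above the test polls the bdev list while it removes the target interface (ip addr del plus link down), watches nvme0n1 disappear as the controller fails, re-adds the address and brings the link back up, and waits for discovery to reattach the namespace as nvme1n1. A condensed sketch of that polling pattern as it appears in the trace (the loop body is an illustrative condensation, not the verbatim script):

  get_bdev_list() {
      # List bdev names, sorted and joined on one line, exactly as the trace pipes them.
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # Poll once per second until the bdev list equals the expected name (or empty string).
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }
  # Usage mirroring the trace: wait_for_bdev nvme0n1 after the first discovery,
  # wait_for_bdev '' after the interface is removed, wait_for_bdev nvme1n1 once it returns.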
00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77126 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77126 ']' 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77126 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77126 00:16:22.961 killing process with pid 77126 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77126' 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77126 00:16:22.961 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77126 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:23.220 rmmod nvme_tcp 00:16:23.220 rmmod nvme_fabrics 00:16:23.220 rmmod nvme_keyring 00:16:23.220 10:54:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77094 ']' 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77094 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77094 ']' 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77094 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77094 00:16:23.220 killing process with pid 77094 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77094' 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77094 00:16:23.220 10:54:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77094 00:16:23.478 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.478 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.478 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:23.478 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.478 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.478 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.478 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.478 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.737 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:23.737 00:16:23.737 real 0m14.426s 00:16:23.737 user 0m24.958s 00:16:23.737 sys 0m2.525s 00:16:23.737 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.737 10:54:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.737 ************************************ 00:16:23.737 END TEST nvmf_discovery_remove_ifc 00:16:23.737 ************************************ 00:16:23.737 10:54:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:23.737 10:54:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:23.737 10:54:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.737 10:54:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.737 ************************************ 00:16:23.737 START TEST nvmf_identify_kernel_target 00:16:23.737 ************************************ 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:23.738 * Looking for test storage... 00:16:23.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.738 
10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:23.738 Cannot find device "nvmf_tgt_br" 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.738 Cannot find device "nvmf_tgt_br2" 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:23.738 Cannot find device "nvmf_tgt_br" 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:23.738 Cannot find device "nvmf_tgt_br2" 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:23.738 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:23.997 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:23.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:16:23.998 00:16:23.998 --- 10.0.0.2 ping statistics --- 00:16:23.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.998 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:23.998 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.998 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:23.998 00:16:23.998 --- 10.0.0.3 ping statistics --- 00:16:23.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.998 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:23.998 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:24.257 00:16:24.257 --- 10.0.0.1 ping statistics --- 00:16:24.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.257 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:24.257 10:54:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:24.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:24.516 Waiting for block devices as requested 00:16:24.516 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:24.774 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:24.774 No valid GPT data, bailing 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:24.774 10:54:54 
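The nvmf_veth_init sequence traced a little further up builds the whole test network out of veth pairs: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target side gets nvmf_tgt_if and nvmf_tgt_if2 (10.0.0.2, 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, and the bridge-side peers are enslaved to nvmf_br, with iptables rules admitting TCP/4420 and bridge-internal forwarding. The "Cannot find device" and "Cannot open network namespace" messages are expected: the script first tears down leftovers from a previous run (the following "true" entries show the failures are deliberately ignored) before recreating everything. Condensed to its essentials, the same topology can be reproduced with:

  ip netns add nvmf_tgt_ns_spdk
  # Three veth pairs: one initiator-side, two target-side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addresses used throughout the test
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the three root-namespace ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Let NVMe/TCP traffic in and across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity pings in both directions, exactly as the trace shows
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1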
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:24.774 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:25.032 No valid GPT data, bailing 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:25.032 No valid GPT data, bailing 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:25.032 No valid GPT data, bailing 00:16:25.032 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:25.033 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:25.291 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid=bb4b8bd3-cfb4-4368-bf29-91254747069c -a 10.0.0.1 -t tcp -s 4420 00:16:25.291 00:16:25.291 Discovery Log Number of Records 2, Generation counter 2 00:16:25.291 =====Discovery Log Entry 0====== 00:16:25.291 trtype: tcp 00:16:25.291 adrfam: ipv4 00:16:25.291 subtype: current discovery subsystem 00:16:25.291 treq: not specified, sq flow control disable supported 00:16:25.291 portid: 1 00:16:25.291 trsvcid: 4420 00:16:25.291 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:25.291 traddr: 10.0.0.1 00:16:25.291 eflags: none 00:16:25.291 sectype: none 00:16:25.291 =====Discovery Log Entry 1====== 00:16:25.291 trtype: tcp 00:16:25.291 adrfam: ipv4 00:16:25.291 subtype: nvme subsystem 00:16:25.291 treq: not 
specified, sq flow control disable supported 00:16:25.291 portid: 1 00:16:25.291 trsvcid: 4420 00:16:25.291 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:25.291 traddr: 10.0.0.1 00:16:25.291 eflags: none 00:16:25.291 sectype: none 00:16:25.291 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:25.291 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:25.291 ===================================================== 00:16:25.291 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:25.291 ===================================================== 00:16:25.291 Controller Capabilities/Features 00:16:25.291 ================================ 00:16:25.291 Vendor ID: 0000 00:16:25.291 Subsystem Vendor ID: 0000 00:16:25.291 Serial Number: b0a5aa4e60a3479f6e61 00:16:25.291 Model Number: Linux 00:16:25.291 Firmware Version: 6.7.0-68 00:16:25.291 Recommended Arb Burst: 0 00:16:25.291 IEEE OUI Identifier: 00 00 00 00:16:25.291 Multi-path I/O 00:16:25.291 May have multiple subsystem ports: No 00:16:25.291 May have multiple controllers: No 00:16:25.291 Associated with SR-IOV VF: No 00:16:25.291 Max Data Transfer Size: Unlimited 00:16:25.291 Max Number of Namespaces: 0 00:16:25.291 Max Number of I/O Queues: 1024 00:16:25.291 NVMe Specification Version (VS): 1.3 00:16:25.291 NVMe Specification Version (Identify): 1.3 00:16:25.291 Maximum Queue Entries: 1024 00:16:25.291 Contiguous Queues Required: No 00:16:25.291 Arbitration Mechanisms Supported 00:16:25.291 Weighted Round Robin: Not Supported 00:16:25.291 Vendor Specific: Not Supported 00:16:25.291 Reset Timeout: 7500 ms 00:16:25.291 Doorbell Stride: 4 bytes 00:16:25.291 NVM Subsystem Reset: Not Supported 00:16:25.291 Command Sets Supported 00:16:25.291 NVM Command Set: Supported 00:16:25.291 Boot Partition: Not Supported 00:16:25.291 Memory Page Size Minimum: 4096 bytes 00:16:25.291 Memory Page Size Maximum: 4096 bytes 00:16:25.291 Persistent Memory Region: Not Supported 00:16:25.291 Optional Asynchronous Events Supported 00:16:25.291 Namespace Attribute Notices: Not Supported 00:16:25.291 Firmware Activation Notices: Not Supported 00:16:25.291 ANA Change Notices: Not Supported 00:16:25.291 PLE Aggregate Log Change Notices: Not Supported 00:16:25.291 LBA Status Info Alert Notices: Not Supported 00:16:25.291 EGE Aggregate Log Change Notices: Not Supported 00:16:25.291 Normal NVM Subsystem Shutdown event: Not Supported 00:16:25.291 Zone Descriptor Change Notices: Not Supported 00:16:25.291 Discovery Log Change Notices: Supported 00:16:25.291 Controller Attributes 00:16:25.291 128-bit Host Identifier: Not Supported 00:16:25.291 Non-Operational Permissive Mode: Not Supported 00:16:25.291 NVM Sets: Not Supported 00:16:25.291 Read Recovery Levels: Not Supported 00:16:25.291 Endurance Groups: Not Supported 00:16:25.291 Predictable Latency Mode: Not Supported 00:16:25.291 Traffic Based Keep ALive: Not Supported 00:16:25.291 Namespace Granularity: Not Supported 00:16:25.291 SQ Associations: Not Supported 00:16:25.291 UUID List: Not Supported 00:16:25.291 Multi-Domain Subsystem: Not Supported 00:16:25.291 Fixed Capacity Management: Not Supported 00:16:25.291 Variable Capacity Management: Not Supported 00:16:25.291 Delete Endurance Group: Not Supported 00:16:25.291 Delete NVM Set: Not Supported 00:16:25.291 Extended LBA Formats Supported: Not Supported 00:16:25.291 Flexible Data 
Placement Supported: Not Supported 00:16:25.291 00:16:25.291 Controller Memory Buffer Support 00:16:25.291 ================================ 00:16:25.291 Supported: No 00:16:25.291 00:16:25.291 Persistent Memory Region Support 00:16:25.291 ================================ 00:16:25.291 Supported: No 00:16:25.291 00:16:25.291 Admin Command Set Attributes 00:16:25.291 ============================ 00:16:25.291 Security Send/Receive: Not Supported 00:16:25.291 Format NVM: Not Supported 00:16:25.291 Firmware Activate/Download: Not Supported 00:16:25.291 Namespace Management: Not Supported 00:16:25.291 Device Self-Test: Not Supported 00:16:25.291 Directives: Not Supported 00:16:25.291 NVMe-MI: Not Supported 00:16:25.291 Virtualization Management: Not Supported 00:16:25.291 Doorbell Buffer Config: Not Supported 00:16:25.291 Get LBA Status Capability: Not Supported 00:16:25.291 Command & Feature Lockdown Capability: Not Supported 00:16:25.291 Abort Command Limit: 1 00:16:25.291 Async Event Request Limit: 1 00:16:25.291 Number of Firmware Slots: N/A 00:16:25.291 Firmware Slot 1 Read-Only: N/A 00:16:25.291 Firmware Activation Without Reset: N/A 00:16:25.291 Multiple Update Detection Support: N/A 00:16:25.291 Firmware Update Granularity: No Information Provided 00:16:25.291 Per-Namespace SMART Log: No 00:16:25.291 Asymmetric Namespace Access Log Page: Not Supported 00:16:25.291 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:25.291 Command Effects Log Page: Not Supported 00:16:25.291 Get Log Page Extended Data: Supported 00:16:25.291 Telemetry Log Pages: Not Supported 00:16:25.291 Persistent Event Log Pages: Not Supported 00:16:25.291 Supported Log Pages Log Page: May Support 00:16:25.291 Commands Supported & Effects Log Page: Not Supported 00:16:25.291 Feature Identifiers & Effects Log Page:May Support 00:16:25.291 NVMe-MI Commands & Effects Log Page: May Support 00:16:25.291 Data Area 4 for Telemetry Log: Not Supported 00:16:25.291 Error Log Page Entries Supported: 1 00:16:25.291 Keep Alive: Not Supported 00:16:25.291 00:16:25.291 NVM Command Set Attributes 00:16:25.291 ========================== 00:16:25.291 Submission Queue Entry Size 00:16:25.291 Max: 1 00:16:25.291 Min: 1 00:16:25.291 Completion Queue Entry Size 00:16:25.291 Max: 1 00:16:25.291 Min: 1 00:16:25.291 Number of Namespaces: 0 00:16:25.291 Compare Command: Not Supported 00:16:25.291 Write Uncorrectable Command: Not Supported 00:16:25.291 Dataset Management Command: Not Supported 00:16:25.291 Write Zeroes Command: Not Supported 00:16:25.291 Set Features Save Field: Not Supported 00:16:25.291 Reservations: Not Supported 00:16:25.291 Timestamp: Not Supported 00:16:25.291 Copy: Not Supported 00:16:25.291 Volatile Write Cache: Not Present 00:16:25.291 Atomic Write Unit (Normal): 1 00:16:25.291 Atomic Write Unit (PFail): 1 00:16:25.291 Atomic Compare & Write Unit: 1 00:16:25.291 Fused Compare & Write: Not Supported 00:16:25.291 Scatter-Gather List 00:16:25.291 SGL Command Set: Supported 00:16:25.291 SGL Keyed: Not Supported 00:16:25.291 SGL Bit Bucket Descriptor: Not Supported 00:16:25.291 SGL Metadata Pointer: Not Supported 00:16:25.291 Oversized SGL: Not Supported 00:16:25.291 SGL Metadata Address: Not Supported 00:16:25.291 SGL Offset: Supported 00:16:25.291 Transport SGL Data Block: Not Supported 00:16:25.292 Replay Protected Memory Block: Not Supported 00:16:25.292 00:16:25.292 Firmware Slot Information 00:16:25.292 ========================= 00:16:25.292 Active slot: 0 00:16:25.292 00:16:25.292 00:16:25.292 Error Log 
00:16:25.292 ========= 00:16:25.292 00:16:25.292 Active Namespaces 00:16:25.292 ================= 00:16:25.292 Discovery Log Page 00:16:25.292 ================== 00:16:25.292 Generation Counter: 2 00:16:25.292 Number of Records: 2 00:16:25.292 Record Format: 0 00:16:25.292 00:16:25.292 Discovery Log Entry 0 00:16:25.292 ---------------------- 00:16:25.292 Transport Type: 3 (TCP) 00:16:25.292 Address Family: 1 (IPv4) 00:16:25.292 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:25.292 Entry Flags: 00:16:25.292 Duplicate Returned Information: 0 00:16:25.292 Explicit Persistent Connection Support for Discovery: 0 00:16:25.292 Transport Requirements: 00:16:25.292 Secure Channel: Not Specified 00:16:25.292 Port ID: 1 (0x0001) 00:16:25.292 Controller ID: 65535 (0xffff) 00:16:25.292 Admin Max SQ Size: 32 00:16:25.292 Transport Service Identifier: 4420 00:16:25.292 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:25.292 Transport Address: 10.0.0.1 00:16:25.292 Discovery Log Entry 1 00:16:25.292 ---------------------- 00:16:25.292 Transport Type: 3 (TCP) 00:16:25.292 Address Family: 1 (IPv4) 00:16:25.292 Subsystem Type: 2 (NVM Subsystem) 00:16:25.292 Entry Flags: 00:16:25.292 Duplicate Returned Information: 0 00:16:25.292 Explicit Persistent Connection Support for Discovery: 0 00:16:25.292 Transport Requirements: 00:16:25.292 Secure Channel: Not Specified 00:16:25.292 Port ID: 1 (0x0001) 00:16:25.292 Controller ID: 65535 (0xffff) 00:16:25.292 Admin Max SQ Size: 32 00:16:25.292 Transport Service Identifier: 4420 00:16:25.292 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:25.292 Transport Address: 10.0.0.1 00:16:25.292 10:54:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:25.551 get_feature(0x01) failed 00:16:25.551 get_feature(0x02) failed 00:16:25.551 get_feature(0x04) failed 00:16:25.551 ===================================================== 00:16:25.551 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:25.551 ===================================================== 00:16:25.551 Controller Capabilities/Features 00:16:25.551 ================================ 00:16:25.551 Vendor ID: 0000 00:16:25.551 Subsystem Vendor ID: 0000 00:16:25.551 Serial Number: b26346c504c3cb8ba2bd 00:16:25.551 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:25.551 Firmware Version: 6.7.0-68 00:16:25.551 Recommended Arb Burst: 6 00:16:25.551 IEEE OUI Identifier: 00 00 00 00:16:25.551 Multi-path I/O 00:16:25.551 May have multiple subsystem ports: Yes 00:16:25.551 May have multiple controllers: Yes 00:16:25.551 Associated with SR-IOV VF: No 00:16:25.551 Max Data Transfer Size: Unlimited 00:16:25.551 Max Number of Namespaces: 1024 00:16:25.551 Max Number of I/O Queues: 128 00:16:25.551 NVMe Specification Version (VS): 1.3 00:16:25.551 NVMe Specification Version (Identify): 1.3 00:16:25.551 Maximum Queue Entries: 1024 00:16:25.551 Contiguous Queues Required: No 00:16:25.551 Arbitration Mechanisms Supported 00:16:25.551 Weighted Round Robin: Not Supported 00:16:25.551 Vendor Specific: Not Supported 00:16:25.551 Reset Timeout: 7500 ms 00:16:25.551 Doorbell Stride: 4 bytes 00:16:25.551 NVM Subsystem Reset: Not Supported 00:16:25.551 Command Sets Supported 00:16:25.551 NVM Command Set: Supported 00:16:25.551 Boot Partition: Not Supported 00:16:25.551 Memory 
Page Size Minimum: 4096 bytes 00:16:25.551 Memory Page Size Maximum: 4096 bytes 00:16:25.551 Persistent Memory Region: Not Supported 00:16:25.551 Optional Asynchronous Events Supported 00:16:25.551 Namespace Attribute Notices: Supported 00:16:25.551 Firmware Activation Notices: Not Supported 00:16:25.551 ANA Change Notices: Supported 00:16:25.551 PLE Aggregate Log Change Notices: Not Supported 00:16:25.551 LBA Status Info Alert Notices: Not Supported 00:16:25.551 EGE Aggregate Log Change Notices: Not Supported 00:16:25.551 Normal NVM Subsystem Shutdown event: Not Supported 00:16:25.551 Zone Descriptor Change Notices: Not Supported 00:16:25.551 Discovery Log Change Notices: Not Supported 00:16:25.551 Controller Attributes 00:16:25.551 128-bit Host Identifier: Supported 00:16:25.551 Non-Operational Permissive Mode: Not Supported 00:16:25.551 NVM Sets: Not Supported 00:16:25.551 Read Recovery Levels: Not Supported 00:16:25.551 Endurance Groups: Not Supported 00:16:25.551 Predictable Latency Mode: Not Supported 00:16:25.551 Traffic Based Keep ALive: Supported 00:16:25.551 Namespace Granularity: Not Supported 00:16:25.551 SQ Associations: Not Supported 00:16:25.551 UUID List: Not Supported 00:16:25.551 Multi-Domain Subsystem: Not Supported 00:16:25.551 Fixed Capacity Management: Not Supported 00:16:25.551 Variable Capacity Management: Not Supported 00:16:25.551 Delete Endurance Group: Not Supported 00:16:25.551 Delete NVM Set: Not Supported 00:16:25.551 Extended LBA Formats Supported: Not Supported 00:16:25.551 Flexible Data Placement Supported: Not Supported 00:16:25.551 00:16:25.551 Controller Memory Buffer Support 00:16:25.551 ================================ 00:16:25.551 Supported: No 00:16:25.551 00:16:25.551 Persistent Memory Region Support 00:16:25.551 ================================ 00:16:25.551 Supported: No 00:16:25.551 00:16:25.551 Admin Command Set Attributes 00:16:25.551 ============================ 00:16:25.551 Security Send/Receive: Not Supported 00:16:25.551 Format NVM: Not Supported 00:16:25.551 Firmware Activate/Download: Not Supported 00:16:25.551 Namespace Management: Not Supported 00:16:25.551 Device Self-Test: Not Supported 00:16:25.551 Directives: Not Supported 00:16:25.551 NVMe-MI: Not Supported 00:16:25.551 Virtualization Management: Not Supported 00:16:25.551 Doorbell Buffer Config: Not Supported 00:16:25.551 Get LBA Status Capability: Not Supported 00:16:25.551 Command & Feature Lockdown Capability: Not Supported 00:16:25.551 Abort Command Limit: 4 00:16:25.551 Async Event Request Limit: 4 00:16:25.551 Number of Firmware Slots: N/A 00:16:25.551 Firmware Slot 1 Read-Only: N/A 00:16:25.551 Firmware Activation Without Reset: N/A 00:16:25.551 Multiple Update Detection Support: N/A 00:16:25.551 Firmware Update Granularity: No Information Provided 00:16:25.551 Per-Namespace SMART Log: Yes 00:16:25.551 Asymmetric Namespace Access Log Page: Supported 00:16:25.551 ANA Transition Time : 10 sec 00:16:25.551 00:16:25.551 Asymmetric Namespace Access Capabilities 00:16:25.552 ANA Optimized State : Supported 00:16:25.552 ANA Non-Optimized State : Supported 00:16:25.552 ANA Inaccessible State : Supported 00:16:25.552 ANA Persistent Loss State : Supported 00:16:25.552 ANA Change State : Supported 00:16:25.552 ANAGRPID is not changed : No 00:16:25.552 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:25.552 00:16:25.552 ANA Group Identifier Maximum : 128 00:16:25.552 Number of ANA Group Identifiers : 128 00:16:25.552 Max Number of Allowed Namespaces : 1024 00:16:25.552 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:25.552 Command Effects Log Page: Supported 00:16:25.552 Get Log Page Extended Data: Supported 00:16:25.552 Telemetry Log Pages: Not Supported 00:16:25.552 Persistent Event Log Pages: Not Supported 00:16:25.552 Supported Log Pages Log Page: May Support 00:16:25.552 Commands Supported & Effects Log Page: Not Supported 00:16:25.552 Feature Identifiers & Effects Log Page:May Support 00:16:25.552 NVMe-MI Commands & Effects Log Page: May Support 00:16:25.552 Data Area 4 for Telemetry Log: Not Supported 00:16:25.552 Error Log Page Entries Supported: 128 00:16:25.552 Keep Alive: Supported 00:16:25.552 Keep Alive Granularity: 1000 ms 00:16:25.552 00:16:25.552 NVM Command Set Attributes 00:16:25.552 ========================== 00:16:25.552 Submission Queue Entry Size 00:16:25.552 Max: 64 00:16:25.552 Min: 64 00:16:25.552 Completion Queue Entry Size 00:16:25.552 Max: 16 00:16:25.552 Min: 16 00:16:25.552 Number of Namespaces: 1024 00:16:25.552 Compare Command: Not Supported 00:16:25.552 Write Uncorrectable Command: Not Supported 00:16:25.552 Dataset Management Command: Supported 00:16:25.552 Write Zeroes Command: Supported 00:16:25.552 Set Features Save Field: Not Supported 00:16:25.552 Reservations: Not Supported 00:16:25.552 Timestamp: Not Supported 00:16:25.552 Copy: Not Supported 00:16:25.552 Volatile Write Cache: Present 00:16:25.552 Atomic Write Unit (Normal): 1 00:16:25.552 Atomic Write Unit (PFail): 1 00:16:25.552 Atomic Compare & Write Unit: 1 00:16:25.552 Fused Compare & Write: Not Supported 00:16:25.552 Scatter-Gather List 00:16:25.552 SGL Command Set: Supported 00:16:25.552 SGL Keyed: Not Supported 00:16:25.552 SGL Bit Bucket Descriptor: Not Supported 00:16:25.552 SGL Metadata Pointer: Not Supported 00:16:25.552 Oversized SGL: Not Supported 00:16:25.552 SGL Metadata Address: Not Supported 00:16:25.552 SGL Offset: Supported 00:16:25.552 Transport SGL Data Block: Not Supported 00:16:25.552 Replay Protected Memory Block: Not Supported 00:16:25.552 00:16:25.552 Firmware Slot Information 00:16:25.552 ========================= 00:16:25.552 Active slot: 0 00:16:25.552 00:16:25.552 Asymmetric Namespace Access 00:16:25.552 =========================== 00:16:25.552 Change Count : 0 00:16:25.552 Number of ANA Group Descriptors : 1 00:16:25.552 ANA Group Descriptor : 0 00:16:25.552 ANA Group ID : 1 00:16:25.552 Number of NSID Values : 1 00:16:25.552 Change Count : 0 00:16:25.552 ANA State : 1 00:16:25.552 Namespace Identifier : 1 00:16:25.552 00:16:25.552 Commands Supported and Effects 00:16:25.552 ============================== 00:16:25.552 Admin Commands 00:16:25.552 -------------- 00:16:25.552 Get Log Page (02h): Supported 00:16:25.552 Identify (06h): Supported 00:16:25.552 Abort (08h): Supported 00:16:25.552 Set Features (09h): Supported 00:16:25.552 Get Features (0Ah): Supported 00:16:25.552 Asynchronous Event Request (0Ch): Supported 00:16:25.552 Keep Alive (18h): Supported 00:16:25.552 I/O Commands 00:16:25.552 ------------ 00:16:25.552 Flush (00h): Supported 00:16:25.552 Write (01h): Supported LBA-Change 00:16:25.552 Read (02h): Supported 00:16:25.552 Write Zeroes (08h): Supported LBA-Change 00:16:25.552 Dataset Management (09h): Supported 00:16:25.552 00:16:25.552 Error Log 00:16:25.552 ========= 00:16:25.552 Entry: 0 00:16:25.552 Error Count: 0x3 00:16:25.552 Submission Queue Id: 0x0 00:16:25.552 Command Id: 0x5 00:16:25.552 Phase Bit: 0 00:16:25.552 Status Code: 0x2 00:16:25.552 Status Code Type: 0x0 00:16:25.552 Do Not Retry: 1 00:16:25.552 Error 
Location: 0x28 00:16:25.552 LBA: 0x0 00:16:25.552 Namespace: 0x0 00:16:25.552 Vendor Log Page: 0x0 00:16:25.552 ----------- 00:16:25.552 Entry: 1 00:16:25.552 Error Count: 0x2 00:16:25.552 Submission Queue Id: 0x0 00:16:25.552 Command Id: 0x5 00:16:25.552 Phase Bit: 0 00:16:25.552 Status Code: 0x2 00:16:25.552 Status Code Type: 0x0 00:16:25.552 Do Not Retry: 1 00:16:25.552 Error Location: 0x28 00:16:25.552 LBA: 0x0 00:16:25.552 Namespace: 0x0 00:16:25.552 Vendor Log Page: 0x0 00:16:25.552 ----------- 00:16:25.552 Entry: 2 00:16:25.552 Error Count: 0x1 00:16:25.552 Submission Queue Id: 0x0 00:16:25.552 Command Id: 0x4 00:16:25.552 Phase Bit: 0 00:16:25.552 Status Code: 0x2 00:16:25.552 Status Code Type: 0x0 00:16:25.552 Do Not Retry: 1 00:16:25.552 Error Location: 0x28 00:16:25.552 LBA: 0x0 00:16:25.552 Namespace: 0x0 00:16:25.552 Vendor Log Page: 0x0 00:16:25.552 00:16:25.552 Number of Queues 00:16:25.552 ================ 00:16:25.552 Number of I/O Submission Queues: 128 00:16:25.552 Number of I/O Completion Queues: 128 00:16:25.552 00:16:25.552 ZNS Specific Controller Data 00:16:25.552 ============================ 00:16:25.552 Zone Append Size Limit: 0 00:16:25.552 00:16:25.552 00:16:25.552 Active Namespaces 00:16:25.552 ================= 00:16:25.552 get_feature(0x05) failed 00:16:25.552 Namespace ID:1 00:16:25.552 Command Set Identifier: NVM (00h) 00:16:25.552 Deallocate: Supported 00:16:25.552 Deallocated/Unwritten Error: Not Supported 00:16:25.552 Deallocated Read Value: Unknown 00:16:25.552 Deallocate in Write Zeroes: Not Supported 00:16:25.552 Deallocated Guard Field: 0xFFFF 00:16:25.552 Flush: Supported 00:16:25.552 Reservation: Not Supported 00:16:25.552 Namespace Sharing Capabilities: Multiple Controllers 00:16:25.552 Size (in LBAs): 1310720 (5GiB) 00:16:25.552 Capacity (in LBAs): 1310720 (5GiB) 00:16:25.552 Utilization (in LBAs): 1310720 (5GiB) 00:16:25.552 UUID: 05e57b68-9b57-4fea-9b41-2e3bedd7c5fb 00:16:25.552 Thin Provisioning: Not Supported 00:16:25.552 Per-NS Atomic Units: Yes 00:16:25.552 Atomic Boundary Size (Normal): 0 00:16:25.552 Atomic Boundary Size (PFail): 0 00:16:25.552 Atomic Boundary Offset: 0 00:16:25.552 NGUID/EUI64 Never Reused: No 00:16:25.552 ANA group ID: 1 00:16:25.552 Namespace Write Protected: No 00:16:25.552 Number of LBA Formats: 1 00:16:25.552 Current LBA Format: LBA Format #00 00:16:25.552 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:25.552 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.552 rmmod nvme_tcp 00:16:25.552 rmmod nvme_fabrics 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:25.552 10:54:55 
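configure_kernel_target, whose result was just exercised by the discovery listing and the two identify dumps above, exports a plain block device as a kernel NVMe-oF/TCP subsystem using nothing but configfs. The redirection targets of the echo commands are not visible in the xtrace, so the standard nvmet attribute file names are filled in below as an assumption; the NQN, address, and the /dev/nvme1n1 namespace device are the ones this run selected after the GPT scan found it unused:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet        # nvmet_tcp ends up loaded too, as the later 'modprobe -r nvmet_tcp nvmet' confirms
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"   # reported as Model Number in identify
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"    # this symlink is what actually publishes the subsystem on the port

The same endpoint is then queried twice with spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:...', once for the discovery subsystem and once for nqn.2016-06.io.spdk:testnqn, which produces the two controller dumps shown above.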
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.552 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.810 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:25.810 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:25.810 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:25.810 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:25.810 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:25.810 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:25.810 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:25.811 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:25.811 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:25.811 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:25.811 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:26.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:26.633 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:26.633 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:26.633 00:16:26.633 real 0m3.030s 00:16:26.633 user 0m1.029s 00:16:26.633 sys 0m1.435s 00:16:26.633 ************************************ 00:16:26.633 END TEST nvmf_identify_kernel_target 00:16:26.633 ************************************ 00:16:26.633 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:26.633 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.633 10:54:56 nvmf_tcp.nvmf_host -- 
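The EXIT trap's clean_kernel_target, traced just above, unwinds that configfs tree in the reverse order: the port's subsystem symlink and the namespace directory have to go before the subsystem directory itself can be removed. Condensed (the target of the echo 0 is inferred to be the namespace's enable attribute; everything else is verbatim from the trace):

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet
  # setup.sh then rebinds the NVMe PCI devices from the kernel driver back to uio_pci_generic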
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:26.633 10:54:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:26.633 10:54:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:26.633 10:54:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.633 ************************************ 00:16:26.633 START TEST nvmf_auth_host 00:16:26.633 ************************************ 00:16:26.633 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:26.892 * Looking for test storage... 00:16:26.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.892 10:54:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.892 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:26.893 Cannot find device "nvmf_tgt_br" 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.893 Cannot find device "nvmf_tgt_br2" 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:26.893 Cannot find device "nvmf_tgt_br" 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:26.893 Cannot find device "nvmf_tgt_br2" 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:26.893 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.151 10:54:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:27.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:16:27.151 00:16:27.151 --- 10.0.0.2 ping statistics --- 00:16:27.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.151 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:27.151 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.151 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:27.151 00:16:27.151 --- 10.0.0.3 ping statistics --- 00:16:27.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.151 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:27.151 00:16:27.151 --- 10.0.0.1 ping statistics --- 00:16:27.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.151 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78012 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78012 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:27.151 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 78012 ']' 00:16:27.152 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.152 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.152 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
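For reference, the nvmf_veth_init trace above boils down to one topology: a network namespace for the SPDK target, veth pairs whose host-side ends are enslaved to a bridge, and an iptables rule admitting NVMe/TCP traffic on port 4420. The sketch below is a condensed, hedged reconstruction assembled only from the commands visible in the log (namespace, interface names and addresses are taken verbatim); the preliminary teardown, the "nomaster" steps and error handling are omitted, so treat it as an illustration rather than a copy of nvmf/common.sh.

  NS=nvmf_tgt_ns_spdk                                        # target namespace from the trace
  ip netns add "$NS"

  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2  # second target port
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"

  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # NVMF_INITIATOR_IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP

  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge joins the host-side ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                   # initiator -> target, as in the log
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator

With that in place, the trace launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth), so the SPDK target answers on 10.0.0.2/10.0.0.3 while the kernel-side initiator keeps 10.0.0.1.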
00:16:27.152 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.152 10:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.528 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:28.528 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=513dbada37d2400eaa10086a10a51b78 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Z2I 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 513dbada37d2400eaa10086a10a51b78 0 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 513dbada37d2400eaa10086a10a51b78 0 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=513dbada37d2400eaa10086a10a51b78 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:28.529 10:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Z2I 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Z2I 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Z2I 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:28.529 10:54:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c046af19d68257155c5e9af00a3f5422106a678c984ab4ed16146832bc981904 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.o1u 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c046af19d68257155c5e9af00a3f5422106a678c984ab4ed16146832bc981904 3 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c046af19d68257155c5e9af00a3f5422106a678c984ab4ed16146832bc981904 3 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c046af19d68257155c5e9af00a3f5422106a678c984ab4ed16146832bc981904 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.o1u 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.o1u 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.o1u 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=89e784a3f733d65948864b4c9a1dbe042759831fd5c2969c 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Bm6 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 89e784a3f733d65948864b4c9a1dbe042759831fd5c2969c 0 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 89e784a3f733d65948864b4c9a1dbe042759831fd5c2969c 0 
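The gen_dhchap_key calls traced above and below all follow one pattern: map the digest name to its id (null=0, sha256=1, sha384=2, sha512=3), read len/2 random bytes as hex from /dev/urandom, wrap the result in the DHHC-1 secret format with an inline python helper, and store it in a mode-0600 temp file. The sketch below is a hedged re-creation of that pattern; in particular the python step, which judging from the keys visible in this log appears to base64-encode the ASCII key followed by a 4-byte CRC-32 trailer, is an inference and not SPDK's exact helper.

  gen_dhchap_key() {                       # hedged re-creation, not nvmf/common.sh verbatim
      local digest=$1 len=$2 key file
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters, as in the trace
      file=$(mktemp -t "spdk.key-$digest.XXX")

      # Assumed DHHC-1 encoding: base64(ASCII key || CRC-32(key) little-endian), colon-terminated.
      python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
  import base64, binascii, struct, sys
  key, digest = sys.argv[1].encode(), int(sys.argv[2])
  blob = key + struct.pack("<I", binascii.crc32(key) & 0xFFFFFFFF)
  print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
  PY

      chmod 0600 "$file"
      echo "$file"
  }

  # Usage mirroring the trace: keys[0]=$(gen_dhchap_key null 32); ckeys[0]=$(gen_dhchap_key sha512 64)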
00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=89e784a3f733d65948864b4c9a1dbe042759831fd5c2969c 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Bm6 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Bm6 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Bm6 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cc8d4a7c0507f67a9d1b3f6b31cd070f0f5296614ece70c8 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6R8 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cc8d4a7c0507f67a9d1b3f6b31cd070f0f5296614ece70c8 2 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cc8d4a7c0507f67a9d1b3f6b31cd070f0f5296614ece70c8 2 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cc8d4a7c0507f67a9d1b3f6b31cd070f0f5296614ece70c8 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6R8 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6R8 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6R8 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:28.529 10:54:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7d6975d6215b894c73f97840f61f0251 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jrv 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7d6975d6215b894c73f97840f61f0251 1 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7d6975d6215b894c73f97840f61f0251 1 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7d6975d6215b894c73f97840f61f0251 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:28.529 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:28.788 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jrv 00:16:28.788 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jrv 00:16:28.788 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jrv 00:16:28.788 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b17bd2106fbeeab995e0de5d65a4dfb1 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.DeD 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b17bd2106fbeeab995e0de5d65a4dfb1 1 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b17bd2106fbeeab995e0de5d65a4dfb1 1 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=b17bd2106fbeeab995e0de5d65a4dfb1 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.DeD 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.DeD 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.DeD 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=284a49a45896ab741d2afaf7a0ea44417986d8f595a50e14 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sWG 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 284a49a45896ab741d2afaf7a0ea44417986d8f595a50e14 2 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 284a49a45896ab741d2afaf7a0ea44417986d8f595a50e14 2 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=284a49a45896ab741d2afaf7a0ea44417986d8f595a50e14 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sWG 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sWG 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.sWG 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:28.789 10:54:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a6500c1fbe9742c88944d696f15e4d47 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5Cv 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a6500c1fbe9742c88944d696f15e4d47 0 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a6500c1fbe9742c88944d696f15e4d47 0 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a6500c1fbe9742c88944d696f15e4d47 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5Cv 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5Cv 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.5Cv 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=807c0b2abea67dadcac9b412dd675c367947b5d7946dba1425409bb267695e74 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.06Q 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 807c0b2abea67dadcac9b412dd675c367947b5d7946dba1425409bb267695e74 3 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 807c0b2abea67dadcac9b412dd675c367947b5d7946dba1425409bb267695e74 3 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=807c0b2abea67dadcac9b412dd675c367947b5d7946dba1425409bb267695e74 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:28.789 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.06Q 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.06Q 00:16:29.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.06Q 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78012 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 78012 ']' 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:29.047 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Z2I 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.o1u ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.o1u 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Bm6 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6R8 ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.6R8 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jrv 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.DeD ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DeD 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.sWG 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5Cv ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5Cv 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.06Q 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:29.306 10:54:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.306 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:29.307 10:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:29.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:29.822 Waiting for block devices as requested 00:16:29.822 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:29.822 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:30.388 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:30.388 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:30.388 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:30.388 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:30.389 No valid GPT data, bailing 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:30.389 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:30.647 No valid GPT data, bailing 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:30.647 No valid GPT data, bailing 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:30.647 No valid GPT data, bailing 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid=bb4b8bd3-cfb4-4368-bf29-91254747069c -a 10.0.0.1 -t tcp -s 4420 00:16:30.647 00:16:30.647 Discovery Log Number of Records 2, Generation counter 2 00:16:30.647 =====Discovery Log Entry 0====== 00:16:30.647 trtype: tcp 00:16:30.647 adrfam: ipv4 00:16:30.647 subtype: current discovery subsystem 00:16:30.647 treq: not specified, sq flow control disable supported 00:16:30.647 portid: 1 00:16:30.647 trsvcid: 4420 00:16:30.647 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:30.647 traddr: 10.0.0.1 00:16:30.647 eflags: none 00:16:30.647 sectype: none 00:16:30.647 =====Discovery Log Entry 1====== 00:16:30.647 trtype: tcp 00:16:30.647 adrfam: ipv4 00:16:30.647 subtype: nvme subsystem 00:16:30.647 treq: not specified, sq flow control disable supported 00:16:30.647 portid: 1 00:16:30.647 trsvcid: 4420 00:16:30.647 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:30.647 traddr: 10.0.0.1 00:16:30.647 eflags: none 00:16:30.647 sectype: none 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:30.647 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:16:30.905 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.906 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.906 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.906 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.164 nvme0n1 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.164 nvme0n1 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.164 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.423 
10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.423 10:55:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.423 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.424 10:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.424 nvme0n1 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:31.424 10:55:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.424 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.682 nvme0n1 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.682 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.683 10:55:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.683 nvme0n1 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.683 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.942 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:31.943 
10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
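The trace above repeats one pattern per digest/dhgroup/keyid combination: program the kernel nvmet host entry with the DH-HMAC-CHAP parameters, restrict the SPDK initiator to the same digest and DH group, attach, verify the controller name, then detach. A minimal sketch of one such iteration (sha256 / ffdhe2048 / keyid=1) is shown below. The configfs attribute names under /sys/kernel/config/nvmet/hosts/ and the scripts/rpc.py front end are assumptions inferred from the trace (rpc_cmd is the test suite's wrapper and the echo targets are not visible in the xtrace); the DHHC-1 strings are the test keys taken verbatim from the trace.

# Sketch of one iteration, not the verbatim host/auth.sh code.
# Assumed: dhchap_hash/dhchap_dhgroup/dhchap_key/dhchap_ctrl_key attribute names
# and the scripts/rpc.py client; key strings are the trace's test keys for keyid=1.
HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0
HOST_DIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN

# Target side: tell the kernel nvmet target which hash, DH group and keys to
# expect from this host (the controller-key write is skipped when no ckey is set).
echo 'hmac(sha256)' > "$HOST_DIR/dhchap_hash"
echo ffdhe2048 > "$HOST_DIR/dhchap_dhgroup"
echo 'DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==:' > "$HOST_DIR/dhchap_key"
echo 'DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==:' > "$HOST_DIR/dhchap_ctrl_key"

# Host side: limit SPDK to the matching digest/dhgroup, then attach with the
# matching keys. key1/ckey1 are key names presumably registered with SPDK's
# keyring earlier in the run (not shown in this excerpt).
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Success criterion used by the trace: the controller shows up as "nvme0",
# after which it is detached before the next digest/dhgroup/keyid combination.
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
scripts/rpc.py bdev_nvme_detach_controller nvme0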
00:16:31.943 nvme0n1 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:31.943 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:32.510 10:55:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.510 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.511 10:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.511 nvme0n1 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:32.511 10:55:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.511 10:55:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.511 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.769 nvme0n1 00:16:32.769 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.770 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.029 nvme0n1 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:33.029 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.030 nvme0n1 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.030 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.289 nvme0n1 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.289 10:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:33.289 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.225 10:55:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.225 nvme0n1 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.225 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.226 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.226 10:55:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.226 10:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.484 nvme0n1 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:34.484 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.485 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.747 nvme0n1 00:16:34.747 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.747 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.747 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.747 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.747 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.747 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.748 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.006 nvme0n1 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.006 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.263 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:35.264 10:55:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.264 nvme0n1 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.264 10:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.521 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.521 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.521 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:35.522 10:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:37.420 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:37.421 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.421 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.421 10:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.421 nvme0n1 00:16:37.421 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.421 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.421 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.421 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.421 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.421 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.679 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.937 nvme0n1 00:16:37.937 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.937 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.937 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.937 10:55:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.937 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.937 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.938 10:55:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.938 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.504 nvme0n1 00:16:38.504 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.504 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.504 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.504 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.504 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.504 10:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:38.504 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:38.505 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.505 
10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.763 nvme0n1 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.763 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.764 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.330 nvme0n1 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.330 10:55:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.330 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.331 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.331 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.331 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.331 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.331 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.331 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.331 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.331 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.331 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.331 10:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.896 nvme0n1 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.896 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.897 10:55:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.462 nvme0n1 00:16:40.462 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.462 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.462 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.462 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.462 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.462 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.462 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.462 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.462 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:40.462 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.720 
10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.720 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.285 nvme0n1 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:41.285 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.286 10:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.852 nvme0n1 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.852 10:55:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.852 10:55:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.852 10:55:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.424 nvme0n1 00:16:42.424 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.424 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.424 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.424 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.424 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.424 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.424 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.424 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.424 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.424 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.698 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.699 nvme0n1 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.699 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.957 nvme0n1 00:16:42.957 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:42.958 
10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.958 nvme0n1 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.958 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.217 
10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.217 nvme0n1 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.217 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.476 nvme0n1 00:16:43.476 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.476 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.476 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.476 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.476 10:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:43.476 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.477 nvme0n1 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.477 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.736 
10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.736 10:55:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.736 nvme0n1 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:43.736 10:55:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.736 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.994 nvme0n1 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.994 10:55:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.994 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.252 nvme0n1 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:44.252 
10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:16:44.252 nvme0n1 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.252 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.511 10:55:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:44.511 10:55:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.511 nvme0n1 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.511 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.770 10:55:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.770 10:55:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.770 nvme0n1 00:16:44.770 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.029 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.288 nvme0n1 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.288 10:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.547 nvme0n1 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.547 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.806 nvme0n1 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.807 10:55:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.807 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.066 nvme0n1 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:46.066 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.067 10:55:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.067 10:55:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 nvme0n1 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.634 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.635 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.635 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.635 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.635 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.635 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.635 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.635 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.635 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.893 nvme0n1 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:46.893 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.894 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.460 nvme0n1 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.460 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:47.461 10:55:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.461 10:55:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 nvme0n1 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.720 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.286 nvme0n1 00:16:48.286 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.286 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.286 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.286 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.286 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.286 10:55:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.545 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.112 nvme0n1 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.112 10:55:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:49.112 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.113 10:55:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.113 10:55:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.680 nvme0n1 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:49.680 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:49.938 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.938 
10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.506 nvme0n1 00:16:50.506 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.506 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.506 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.506 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.506 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.506 10:55:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.506 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.073 nvme0n1 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:51.073 10:55:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.073 10:55:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.073 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.331 nvme0n1 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:51.331 10:55:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.331 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.332 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.332 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.332 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.332 10:55:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.332 nvme0n1 00:16:51.332 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.332 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.332 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.332 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.332 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.332 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:51.600 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.601 nvme0n1 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.601 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 nvme0n1 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 nvme0n1 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.118 nvme0n1 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.118 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.376 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.377 nvme0n1 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.377 10:55:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:52.377 
10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.377 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.635 nvme0n1 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.635 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.636 
10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.636 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.894 nvme0n1 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:52.894 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.895 nvme0n1 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.895 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.153 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.153 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.153 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.153 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.153 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.153 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.153 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.153 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:16:53.153 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.154 nvme0n1 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.154 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.412 
10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.412 10:55:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.412 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.413 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.413 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.413 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.413 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.413 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.413 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.413 10:55:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.413 nvme0n1 00:16:53.413 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.413 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.413 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.413 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.413 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.413 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:53.672 10:55:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.672 nvme0n1 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.672 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.931 10:55:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.931 nvme0n1 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.931 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.190 
10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
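[editorial sketch] The trace above cycles through the same connect_authenticate pattern for every DH group and key index: restrict the host to one digest/dhgroup via bdev_nvme_set_options, attach a controller to 10.0.0.1:4420 with the matching --dhchap-key/--dhchap-ctrlr-key, confirm it appears in bdev_nvme_get_controllers, then detach it. A minimal host-side sketch of that loop follows; it assumes an rpc.py client at ./scripts/rpc.py (the path is an assumption), that the DH-HMAC-CHAP secrets were already registered under the key names key0..key4 / ckey0..ckey3 seen in the trace, and it omits the target-side nvmet_auth_set_key step because the trace does not show the configfs paths that helper writes to.

    # Hedged sketch of the host-side loop visible in the trace above.
    # Assumptions: rpc.py lives at ./scripts/rpc.py, the target listens on
    # 10.0.0.1:4420, and keys key0..key4 / ckey0..ckey3 are already loaded.
    rpc=./scripts/rpc.py
    digest=sha512

    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in 0 1 2 3 4; do
            # Limit the host to the digest/dhgroup combination under test.
            "$rpc" bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

            # keyid 4 has no controller key in the trace (ckey is empty),
            # so only pass --dhchap-ctrlr-key when one exists.
            ckey_arg=()
            [ "$keyid" -lt 4 ] && ckey_arg=(--dhchap-ctrlr-key "ckey$keyid")

            "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key$keyid" "${ckey_arg[@]}"

            # The attach only succeeds if DH-HMAC-CHAP authentication passed;
            # verify the controller is present, then tear it down.
            "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'
            "$rpc" bdev_nvme_detach_controller nvme0
        done
    done

The repeated bdev_nvme_set_options calls between detach and attach are shown returning success in the trace itself (the "[[ 0 == 0 ]]" checks); whether that is accepted mid-run in other SPDK builds is not something this log establishes.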
00:16:54.190 nvme0n1 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.190 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.449 10:55:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:54.449 10:55:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.449 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.449 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.449 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.449 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.449 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.449 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.450 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.709 nvme0n1 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.709 10:55:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.709 10:55:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.709 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.335 nvme0n1 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:55.335 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.336 10:55:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.595 nvme0n1 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.595 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.915 nvme0n1 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:55.915 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:56.175 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.176 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.434 nvme0n1 00:16:56.434 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.434 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.434 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.434 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.434 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.434 10:55:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTEzZGJhZGEzN2QyNDAwZWFhMTAwODZhMTBhNTFiNziQxU9m: 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: ]] 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA0NmFmMTlkNjgyNTcxNTVjNWU5YWYwMGEzZjU0MjIxMDZhNjc4Yzk4NGFiNGVkMTYxNDY4MzJiYzk4MTkwNGz4La8=: 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.434 10:55:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.434 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.001 nvme0n1 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:57.001 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.002 10:55:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.002 10:55:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.569 nvme0n1 00:16:57.569 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.569 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.569 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.569 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.569 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.569 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.826 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.826 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2Q2OTc1ZDYyMTViODk0YzczZjk3ODQwZjYxZjAyNTF5t7NX: 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: ]] 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE3YmQyMTA2ZmJlZWFiOTk1ZTBkZTVkNjVhNGRmYjGZMBxR: 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.827 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.399 nvme0n1 00:16:58.399 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.399 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.399 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.399 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.399 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.399 10:55:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg0YTQ5YTQ1ODk2YWI3NDFkMmFmYWY3YTBlYTQ0NDE3OTg2ZDhmNTk1YTUwZTE01nVw9g==: 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: ]] 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1MDBjMWZiZTk3NDJjODg5NDRkNjk2ZjE1ZTRkNDdo+wKH: 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.399 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.400 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.967 nvme0n1 00:16:58.967 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.967 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.967 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.967 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.967 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.967 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODA3YzBiMmFiZWE2N2RhZGNhYzliNDEyZGQ2NzVjMzY3OTQ3YjVkNzk0NmRiYTE0MjU0MDliYjI2NzY5NWU3NILi6fE=: 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.225 10:55:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.225 10:55:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.792 nvme0n1 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODllNzg0YTNmNzMzZDY1OTQ4ODY0YjRjOWExZGJlMDQyNzU5ODMxZmQ1YzI5Njlj2LsPoQ==: 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: ]] 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M4ZDRhN2MwNTA3ZjY3YTlkMWIzZjZiMzFjZDA3MGYwZjUyOTY2MTRlY2U3MGM4wlEZAw==: 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.792 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.793 request: 00:16:59.793 { 00:16:59.793 "name": "nvme0", 00:16:59.793 "trtype": "tcp", 00:16:59.793 "traddr": "10.0.0.1", 00:16:59.793 "adrfam": "ipv4", 00:16:59.793 "trsvcid": "4420", 00:16:59.793 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:59.793 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:59.793 "prchk_reftag": false, 00:16:59.793 "prchk_guard": false, 00:16:59.793 "hdgst": false, 00:16:59.793 "ddgst": false, 00:16:59.793 "method": "bdev_nvme_attach_controller", 00:16:59.793 "req_id": 1 00:16:59.793 } 00:16:59.793 Got JSON-RPC error response 00:16:59.793 response: 00:16:59.793 { 00:16:59.793 "code": -5, 00:16:59.793 "message": "Input/output error" 00:16:59.793 } 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:16:59.793 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.052 10:55:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.052 request: 00:17:00.052 { 00:17:00.052 "name": "nvme0", 00:17:00.052 "trtype": "tcp", 00:17:00.052 "traddr": "10.0.0.1", 00:17:00.052 "adrfam": "ipv4", 00:17:00.052 "trsvcid": "4420", 00:17:00.052 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:00.052 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:00.052 "prchk_reftag": false, 00:17:00.052 "prchk_guard": false, 00:17:00.052 "hdgst": false, 00:17:00.052 "ddgst": false, 00:17:00.052 "dhchap_key": "key2", 00:17:00.052 "method": "bdev_nvme_attach_controller", 00:17:00.052 "req_id": 1 00:17:00.052 } 00:17:00.052 Got JSON-RPC error response 00:17:00.052 response: 00:17:00.052 { 00:17:00.052 "code": -5, 00:17:00.052 "message": "Input/output error" 00:17:00.052 } 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.052 10:55:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.052 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.052 request: 00:17:00.052 { 00:17:00.053 "name": "nvme0", 00:17:00.053 "trtype": "tcp", 00:17:00.053 "traddr": "10.0.0.1", 00:17:00.053 "adrfam": "ipv4", 00:17:00.053 "trsvcid": "4420", 00:17:00.053 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:00.053 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:17:00.053 "prchk_reftag": false, 00:17:00.053 "prchk_guard": false, 00:17:00.053 "hdgst": false, 00:17:00.053 "ddgst": false, 00:17:00.053 "dhchap_key": "key1", 00:17:00.053 "dhchap_ctrlr_key": "ckey2", 00:17:00.053 "method": "bdev_nvme_attach_controller", 00:17:00.053 "req_id": 1 00:17:00.053 } 00:17:00.053 Got JSON-RPC error response 00:17:00.053 response: 00:17:00.053 { 00:17:00.053 "code": -5, 00:17:00.053 "message": "Input/output error" 00:17:00.053 } 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.053 rmmod nvme_tcp 00:17:00.053 rmmod nvme_fabrics 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78012 ']' 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78012 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 78012 ']' 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 78012 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.053 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78012 00:17:00.312 killing process with pid 78012 00:17:00.312 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.312 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.312 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78012' 00:17:00.312 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 78012 00:17:00.312 10:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@974 -- # wait 78012 00:17:00.312 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:00.312 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:00.312 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:00.312 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.312 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:00.312 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.312 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.312 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:00.570 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:01.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:01.394 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:01.394 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:01.394 10:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Z2I /tmp/spdk.key-null.Bm6 /tmp/spdk.key-sha256.jrv /tmp/spdk.key-sha384.sWG /tmp/spdk.key-sha512.06Q /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:01.394 10:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:01.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:01.652 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:01.652 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:17:01.652 00:17:01.652 real 0m35.032s 00:17:01.652 user 0m31.811s 00:17:01.652 sys 0m3.851s 00:17:01.652 ************************************ 00:17:01.652 END TEST nvmf_auth_host 00:17:01.652 ************************************ 00:17:01.652 10:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.652 10:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.911 ************************************ 00:17:01.911 START TEST nvmf_digest 00:17:01.911 ************************************ 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:01.911 * Looking for test storage... 00:17:01.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.911 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.912 10:55:31 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:01.912 Cannot find device "nvmf_tgt_br" 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.912 Cannot find device "nvmf_tgt_br2" 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:01.912 Cannot find device "nvmf_tgt_br" 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:01.912 Cannot find device "nvmf_tgt_br2" 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:01.912 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:02.170 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:02.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:02.429 00:17:02.429 --- 10.0.0.2 ping statistics --- 00:17:02.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.429 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:02.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:17:02.429 00:17:02.429 --- 10.0.0.3 ping statistics --- 00:17:02.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.429 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:02.429 00:17:02.429 --- 10.0.0.1 ping statistics --- 00:17:02.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.429 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:02.429 ************************************ 00:17:02.429 START TEST nvmf_digest_clean 00:17:02.429 ************************************ 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79583 00:17:02.429 10:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:02.429 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79583 00:17:02.429 
10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79583 ']' 00:17:02.429 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.429 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.429 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.429 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.429 10:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:02.429 [2024-07-25 10:55:32.064417] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:02.429 [2024-07-25 10:55:32.065362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.688 [2024-07-25 10:55:32.207315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.688 [2024-07-25 10:55:32.329140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.688 [2024-07-25 10:55:32.329480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.688 [2024-07-25 10:55:32.329609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.688 [2024-07-25 10:55:32.329729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.688 [2024-07-25 10:55:32.329898] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
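Editor's note: the nvmfappstart/waitforlisten trace above reduces to launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc and polling until the default RPC socket (/var/tmp/spdk.sock) answers. A minimal sketch of that pattern follows; the polling loop is a simplified stand-in for the real waitforlisten helper in autotest_common.sh, and rpc_get_methods is used here only as a convenient liveness probe.

  spdk=/home/vagrant/spdk_repo/spdk

  # Start the target paused inside the test namespace, as traced above.
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # Poll until the app answers on /var/tmp/spdk.sock (simplified waitforlisten).
  for _ in $(seq 1 100); do
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      if "$spdk/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done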
00:17:02.688 [2024-07-25 10:55:32.330088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:03.649 [2024-07-25 10:55:33.177525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:03.649 null0 00:17:03.649 [2024-07-25 10:55:33.228092] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.649 [2024-07-25 10:55:33.252296] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79615 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79615 /var/tmp/bperf.sock 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79615 ']' 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.649 10:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:03.649 [2024-07-25 10:55:33.319202] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:03.649 [2024-07-25 10:55:33.319305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79615 ] 00:17:03.910 [2024-07-25 10:55:33.460314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.910 [2024-07-25 10:55:33.575899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.848 10:55:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.848 10:55:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:04.848 10:55:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:04.848 10:55:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:04.848 10:55:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:04.848 [2024-07-25 10:55:34.575092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:05.107 10:55:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:05.107 10:55:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:05.367 nvme0n1 00:17:05.367 10:55:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:05.367 10:55:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:05.367 Running I/O for 2 seconds... 
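Editor's note: the run_bperf invocation traced above boils down to the following sequence. This is a condensed sketch of the commands visible in the trace, not the upstream run_bperf function verbatim; the wait for /var/tmp/bperf.sock to appear and the post-run stats check and teardown are omitted.

  spdk=/home/vagrant/spdk_repo/spdk

  # Forward RPCs to the bdevperf instance instead of the target (cf. host/digest.sh@18).
  bperf_rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

  # bdevperf starts paused (--wait-for-rpc) with this run's workload shape:
  # randread, 4096-byte I/O, queue depth 128, 2-second runtime.
  "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!

  bperf_rpc framework_start_init
  # Attach the remote namespace with the TCP data digest enabled (--ddgst).
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Drive the configured workload against the attached bdev.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests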
00:17:07.903 00:17:07.903 Latency(us) 00:17:07.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.903 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:07.903 nvme0n1 : 2.01 15801.21 61.72 0.00 0.00 8095.08 6613.18 22282.24 00:17:07.903 =================================================================================================================== 00:17:07.903 Total : 15801.21 61.72 0.00 0.00 8095.08 6613.18 22282.24 00:17:07.903 0 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:07.903 | select(.opcode=="crc32c") 00:17:07.903 | "\(.module_name) \(.executed)"' 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79615 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79615 ']' 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79615 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79615 00:17:07.903 killing process with pid 79615 00:17:07.903 Received shutdown signal, test time was about 2.000000 seconds 00:17:07.903 00:17:07.903 Latency(us) 00:17:07.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.903 =================================================================================================================== 00:17:07.903 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79615' 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79615 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79615 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79675 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79675 /var/tmp/bperf.sock 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79675 ']' 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:07.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:07.903 10:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:07.903 [2024-07-25 10:55:37.639940] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:07.903 [2024-07-25 10:55:37.640063] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79675 ] 00:17:07.903 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:07.903 Zero copy mechanism will not be used. 
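Editor's note: the pass/fail criterion applied after the first run above (host/digest.sh@93-96, and repeated after each later run) is an accel-stats query against the bdevperf app: digests must actually have been computed, and with DSA disabled (the trailing false argument) they must have run in the software module. A condensed sketch of that check:

  spdk=/home/vagrant/spdk_repo/spdk

  # Pull the crc32c counters from the bdevperf app's accel layer.
  read -r acc_module acc_executed < <(
      "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )

  # Require that crc32c work really happened, and in the expected module.
  (( acc_executed > 0 ))
  [[ $acc_module == software ]]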
00:17:08.162 [2024-07-25 10:55:37.778681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.162 [2024-07-25 10:55:37.880296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.099 10:55:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:09.099 10:55:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:09.099 10:55:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:09.099 10:55:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:09.099 10:55:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:09.358 [2024-07-25 10:55:38.841410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:09.358 10:55:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:09.358 10:55:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:09.616 nvme0n1 00:17:09.616 10:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:09.616 10:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:09.617 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:09.617 Zero copy mechanism will not be used. 00:17:09.617 Running I/O for 2 seconds... 
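Editor's note: stepping back, nvmf_digest_clean sweeps three workload shapes through the same run_bperf helper sketched above; only the I/O pattern, block size and queue depth change, and the trailing "false" keeps DSA offload disabled so crc32c stays in the software module. This is an annotated excerpt of host/digest.sh@128-130 as traced in this log, not standalone code.

  run_bperf randread  4096   128 false   # host/digest.sh@128 - small random reads, deep queue
  run_bperf randread  131072 16  false   # host/digest.sh@129 - large reads, above the 64 KiB zero-copy threshold
  run_bperf randwrite 4096   128 false   # host/digest.sh@130 - small random writes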
00:17:12.172 00:17:12.172 Latency(us) 00:17:12.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.172 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:12.172 nvme0n1 : 2.00 7485.98 935.75 0.00 0.00 2133.68 1757.56 3589.59 00:17:12.172 =================================================================================================================== 00:17:12.172 Total : 7485.98 935.75 0.00 0.00 2133.68 1757.56 3589.59 00:17:12.172 0 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:12.172 | select(.opcode=="crc32c") 00:17:12.172 | "\(.module_name) \(.executed)"' 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79675 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79675 ']' 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79675 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79675 00:17:12.172 killing process with pid 79675 00:17:12.172 Received shutdown signal, test time was about 2.000000 seconds 00:17:12.172 00:17:12.172 Latency(us) 00:17:12.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.172 =================================================================================================================== 00:17:12.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79675' 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79675 00:17:12.172 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79675 00:17:12.430 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:12.430 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:12.430 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:12.430 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:12.430 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:12.430 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:12.430 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:12.430 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79737 00:17:12.431 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:12.431 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79737 /var/tmp/bperf.sock 00:17:12.431 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79737 ']' 00:17:12.431 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:12.431 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.431 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:12.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:12.431 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.431 10:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.431 [2024-07-25 10:55:42.013871] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
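Each run above launches a fresh bdevperf in an idle state and waits for its RPC socket before configuring it; the waiting is done by the waitforlisten helper from autotest_common.sh. A rough stand-in for that pattern (the polling loop below is illustrative, not the helper's actual code):
# start bdevperf idle: -z keeps it running, --wait-for-rpc defers framework init
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!
# wait until the UNIX-domain RPC socket answers (stand-in for waitforlisten)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done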
00:17:12.431 [2024-07-25 10:55:42.013960] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79737 ] 00:17:12.431 [2024-07-25 10:55:42.151810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.689 [2024-07-25 10:55:42.266658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.255 10:55:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.255 10:55:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:13.255 10:55:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:13.255 10:55:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:13.255 10:55:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:13.837 [2024-07-25 10:55:43.301982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:13.837 10:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:13.837 10:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.095 nvme0n1 00:17:14.095 10:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:14.095 10:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:14.095 Running I/O for 2 seconds... 
00:17:16.624 00:17:16.624 Latency(us) 00:17:16.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.624 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.624 nvme0n1 : 2.00 15412.60 60.21 0.00 0.00 8297.11 2561.86 16324.42 00:17:16.624 =================================================================================================================== 00:17:16.624 Total : 15412.60 60.21 0.00 0.00 8297.11 2561.86 16324.42 00:17:16.624 0 00:17:16.624 10:55:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:16.624 10:55:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:16.624 10:55:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:16.624 10:55:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:16.625 10:55:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:16.625 | select(.opcode=="crc32c") 00:17:16.625 | "\(.module_name) \(.executed)"' 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79737 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79737 ']' 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79737 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79737 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:16.625 killing process with pid 79737 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79737' 00:17:16.625 Received shutdown signal, test time was about 2.000000 seconds 00:17:16.625 00:17:16.625 Latency(us) 00:17:16.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.625 =================================================================================================================== 00:17:16.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79737 00:17:16.625 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79737 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79796 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79796 /var/tmp/bperf.sock 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79796 ']' 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.883 10:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:16.883 [2024-07-25 10:55:46.501479] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:16.883 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:16.883 Zero copy mechanism will not be used. 
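After every timed run above, the script pulls accel statistics back over the same socket and verifies that crc32c work was actually executed by the expected module (software here, since scan_dsa=false). Stripped of the xtrace wrappers, that check amounts to:
# fetch accel framework stats from bdevperf and keep only the crc32c row
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
  | { read -r acc_module acc_executed
      # sanity: some crc32c operations ran, and they ran on the expected module
      (( acc_executed > 0 )) && [[ "$acc_module" == "software" ]]; }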
00:17:16.883 [2024-07-25 10:55:46.502350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79796 ] 00:17:17.142 [2024-07-25 10:55:46.648165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.142 [2024-07-25 10:55:46.800832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.135 10:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.135 10:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:18.135 10:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:18.135 10:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:18.136 10:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:18.136 [2024-07-25 10:55:47.835409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:18.395 10:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:18.395 10:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:18.653 nvme0n1 00:17:18.653 10:55:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:18.653 10:55:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:18.653 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:18.653 Zero copy mechanism will not be used. 00:17:18.653 Running I/O for 2 seconds... 
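A quick way to read these result blocks: the MiB/s column is simply IOPS times the configured IO size, so 7485.98 IOPS at 128 KiB works out to 7485.98/8 ≈ 935.75 MiB/s for the randread pass, and 15412.60 IOPS at 4 KiB to 15412.60/256 ≈ 60.21 MiB/s for the first randwrite pass; the Average/min/max columns are per-I/O latency in microseconds, as the Latency(us) header indicates.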
00:17:21.185 00:17:21.185 Latency(us) 00:17:21.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.185 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:21.185 nvme0n1 : 2.00 5075.94 634.49 0.00 0.00 3145.78 2606.55 11141.12 00:17:21.185 =================================================================================================================== 00:17:21.185 Total : 5075.94 634.49 0.00 0.00 3145.78 2606.55 11141.12 00:17:21.185 0 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:21.185 | select(.opcode=="crc32c") 00:17:21.185 | "\(.module_name) \(.executed)"' 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79796 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79796 ']' 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79796 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79796 00:17:21.185 killing process with pid 79796 00:17:21.185 Received shutdown signal, test time was about 2.000000 seconds 00:17:21.185 00:17:21.185 Latency(us) 00:17:21.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.185 =================================================================================================================== 00:17:21.185 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79796' 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79796 00:17:21.185 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79796 00:17:21.443 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79583 00:17:21.443 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79583 ']' 00:17:21.443 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79583 00:17:21.443 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:21.443 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.443 10:55:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79583 00:17:21.443 killing process with pid 79583 00:17:21.443 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:21.443 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:21.443 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79583' 00:17:21.443 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79583 00:17:21.443 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79583 00:17:21.701 ************************************ 00:17:21.701 END TEST nvmf_digest_clean 00:17:21.701 ************************************ 00:17:21.701 00:17:21.701 real 0m19.323s 00:17:21.701 user 0m36.839s 00:17:21.701 sys 0m5.355s 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:21.701 ************************************ 00:17:21.701 START TEST nvmf_digest_error 00:17:21.701 ************************************ 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79885 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79885 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79885 ']' 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:21.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:21.701 10:55:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:21.701 [2024-07-25 10:55:51.427668] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:21.701 [2024-07-25 10:55:51.427763] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.959 [2024-07-25 10:55:51.563543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.218 [2024-07-25 10:55:51.705731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.218 [2024-07-25 10:55:51.705804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.218 [2024-07-25 10:55:51.705817] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.218 [2024-07-25 10:55:51.705825] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.218 [2024-07-25 10:55:51.705833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
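The nvmf_digest_error test starting here routes crc32c through the accel error module on the target and then injects corruption, so the initiator sees the data digest / transient transport errors that fill the rest of this log. Reduced to its RPC skeleton, with rpc_cmd going to the nvmf target's default RPC socket and bperf_rpc/bperf_py to bdevperf's /var/tmp/bperf.sock (a condensed sketch of the traced sequence, not the script verbatim):
# target side: assign the crc32c opcode to the error-injection accel module
rpc_cmd accel_assign_opc -o crc32c -m error
# initiator side: keep NVMe error statistics and retry failed I/O indefinitely
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# start with injection disabled, then attach the controller with data digest enabled
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# corrupt the next 256 crc32c operations on the target and drive I/O from bdevperf
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
bperf_py perform_tests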
00:17:22.218 [2024-07-25 10:55:51.705871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:22.785 [2024-07-25 10:55:52.422583] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:22.785 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:22.786 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.786 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:22.786 [2024-07-25 10:55:52.505631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:23.044 null0 00:17:23.044 [2024-07-25 10:55:52.567473] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.044 [2024-07-25 10:55:52.591582] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79918 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79918 /var/tmp/bperf.sock 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79918 ']' 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.044 10:55:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:23.044 [2024-07-25 10:55:52.655449] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:23.044 [2024-07-25 10:55:52.655585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79918 ] 00:17:23.302 [2024-07-25 10:55:52.796171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.302 [2024-07-25 10:55:52.952291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.302 [2024-07-25 10:55:53.030373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:24.238 10:55:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.238 10:55:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:24.238 10:55:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:24.238 10:55:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:24.238 10:55:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:24.238 10:55:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.238 10:55:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:24.238 10:55:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.238 10:55:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:24.238 10:55:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:24.497 nvme0n1 00:17:24.497 10:55:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:24.497 10:55:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.497 10:55:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:24.497 10:55:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.497 10:55:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:24.497 10:55:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:24.756 Running I/O for 2 seconds... 00:17:24.756 [2024-07-25 10:55:54.342667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:24.756 [2024-07-25 10:55:54.342736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.756 [2024-07-25 10:55:54.342751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.756 [2024-07-25 10:55:54.358755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:24.756 [2024-07-25 10:55:54.358789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.756 [2024-07-25 10:55:54.358802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.756 [2024-07-25 10:55:54.374914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:24.756 [2024-07-25 10:55:54.374947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.756 [2024-07-25 10:55:54.374959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.756 [2024-07-25 10:55:54.391276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:24.756 [2024-07-25 10:55:54.391310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.756 [2024-07-25 10:55:54.391323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.756 [2024-07-25 10:55:54.407869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:24.756 [2024-07-25 10:55:54.407906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.756 [2024-07-25 10:55:54.407919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.756 [2024-07-25 10:55:54.424015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:24.756 [2024-07-25 10:55:54.424046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2394 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.756 [2024-07-25 10:55:54.424059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.756 [2024-07-25 10:55:54.439969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:24.756 [2024-07-25 10:55:54.440001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.756 [2024-07-25 10:55:54.440014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.756 [2024-07-25 10:55:54.456020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:24.756 [2024-07-25 10:55:54.456051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.756 [2024-07-25 10:55:54.456064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.756 [2024-07-25 10:55:54.471936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:24.756 [2024-07-25 10:55:54.471971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.756 [2024-07-25 10:55:54.471983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.756 [2024-07-25 10:55:54.487965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:24.756 [2024-07-25 10:55:54.487996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.756 [2024-07-25 10:55:54.488009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.016 [2024-07-25 10:55:54.505126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.016 [2024-07-25 10:55:54.505160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.016 [2024-07-25 10:55:54.505173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.016 [2024-07-25 10:55:54.521890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.016 [2024-07-25 10:55:54.521922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.521935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.538268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.538301] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.538315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.554561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.554595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.554607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.571502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.571543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.571556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.588511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.588547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.588560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.605442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.605488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.605503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.622339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.622386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.622399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.638767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.638802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.638822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.655539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.655583] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.655595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.671584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.671617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.671629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.688048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.688081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.688095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.704311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.704344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.704356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.720511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.720557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.720569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.736823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.736863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.736886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.017 [2024-07-25 10:55:54.753229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.017 [2024-07-25 10:55:54.753260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.017 [2024-07-25 10:55:54.753272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.769716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.769747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.769760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.785957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.785987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.786000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.802386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.802455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.802467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.818800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.818832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.818844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.835068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.835099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.835112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.851954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.851986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.851999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.868569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.868600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.868613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.885128] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.885159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.885171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.902471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.902504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.902518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.919335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.919366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.919380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.935858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.935888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.935900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.952184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.952218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.952232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.968289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.968319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.968332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.277 [2024-07-25 10:55:54.984515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:54.984548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:54.984560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:25.277 [2024-07-25 10:55:55.000841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.277 [2024-07-25 10:55:55.000892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.277 [2024-07-25 10:55:55.000905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.536 [2024-07-25 10:55:55.017880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.536 [2024-07-25 10:55:55.017923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.536 [2024-07-25 10:55:55.017936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.536 [2024-07-25 10:55:55.034125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.536 [2024-07-25 10:55:55.034158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.536 [2024-07-25 10:55:55.034171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.536 [2024-07-25 10:55:55.050624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.536 [2024-07-25 10:55:55.050655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.050667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.066832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.066872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.066887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.083119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.083150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.083162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.099313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.099344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.099356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.116149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.116210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.116222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.132497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.132527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.132539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.148624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.148684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.148699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.165366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.165402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.165415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.182317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.182351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.182388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.198816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.198871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.198890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.215384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.215416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.215428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.231574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.231607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.231619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.247771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.247802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.247814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.537 [2024-07-25 10:55:55.264421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.537 [2024-07-25 10:55:55.264468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.537 [2024-07-25 10:55:55.264480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.281330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.281361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.281373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.297403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.297434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.297447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.313975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.314006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.314018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.330537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.330569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:25.796 [2024-07-25 10:55:55.330582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.347409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.347442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.347455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.364246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.364278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.364291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.388141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.388172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.388185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.404769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.404819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.404832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.421634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.421669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.421682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.438579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.438611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.438624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.464824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.464870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:61 nsid:1 lba:20249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.464884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.484536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.484568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.484580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.504007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.504038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.504050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.796 [2024-07-25 10:55:55.523492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:25.796 [2024-07-25 10:55:55.523524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.796 [2024-07-25 10:55:55.523536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.543142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.543173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.543185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.562645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.562676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.562688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.581875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.581908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.581921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.600017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.600049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.600061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.617145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.617177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.617189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.634541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.634573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.634585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.651341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.651374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.651386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.668476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.668508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.668519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.685905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.685940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.685953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.703265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.703298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.703311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.720296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 
00:17:26.055 [2024-07-25 10:55:55.720328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.720340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.737038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.737078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.737090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.754068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.754100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.754113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.055 [2024-07-25 10:55:55.771119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.055 [2024-07-25 10:55:55.771149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.055 [2024-07-25 10:55:55.771161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.056 [2024-07-25 10:55:55.788257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.056 [2024-07-25 10:55:55.788288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.056 [2024-07-25 10:55:55.788300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.805370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.805401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.805420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.822444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.822490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.822501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.839083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.839114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.839126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.855797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.855827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.855838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.872572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.872603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.872614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.889216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.889246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.889258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.906189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.906219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.906231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.923185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.923222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.923234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.940419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.940459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.940471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.957810] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.957843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.957865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.975235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.975268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.975280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:55.992403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.315 [2024-07-25 10:55:55.992454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.315 [2024-07-25 10:55:55.992467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.315 [2024-07-25 10:55:56.010298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.316 [2024-07-25 10:55:56.010347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.316 [2024-07-25 10:55:56.010374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.316 [2024-07-25 10:55:56.027530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.316 [2024-07-25 10:55:56.027567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.316 [2024-07-25 10:55:56.027591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.316 [2024-07-25 10:55:56.044983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.316 [2024-07-25 10:55:56.045017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.316 [2024-07-25 10:55:56.045031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.062683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.062723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.062736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.079783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.079816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.079829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.097126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.097162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.097175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.114020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.114079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.114092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.130906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.130938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.130950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.147169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.147199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.147211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.163621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.163651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.163663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.180160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.180190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.180202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.196582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.196617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.196646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.213108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.213140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.213152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.231364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.231396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.231408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.248040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.248069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.248081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.265615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.265646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.265658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.282379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.282410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.574 [2024-07-25 10:55:56.282422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.574 [2024-07-25 10:55:56.299017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0) 00:17:26.574 [2024-07-25 10:55:56.299049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.575 [2024-07-25 10:55:56.299061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:26.833 [2024-07-25 10:55:56.315858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2f4f0)
00:17:26.833 [2024-07-25 10:55:56.315903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:26.833 [2024-07-25 10:55:56.315924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:26.833
00:17:26.833 Latency(us)
00:17:26.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:26.833 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:26.833 nvme0n1 : 2.00 14899.23 58.20 0.00 0.00 8583.85 3395.96 32410.53
00:17:26.833 ===================================================================================================================
00:17:26.833 Total : 14899.23 58.20 0.00 0.00 8583.85 3395.96 32410.53
00:17:26.833 0
00:17:26.833 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:26.833 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:26.833 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:26.833 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:26.833 | .driver_specific
00:17:26.833 | .nvme_error
00:17:26.833 | .status_code
00:17:26.833 | .command_transient_transport_error'
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 ))
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79918
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79918 ']'
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79918
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79918
00:17:27.092 killing process with pid 79918
00:17:27.092 Received shutdown signal, test time was about 2.000000 seconds
00:17:27.092
00:17:27.092 Latency(us)
00:17:27.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:27.092 ===================================================================================================================
00:17:27.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79918'
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79918
00:17:27.092 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79918
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79978
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79978 /var/tmp/bperf.sock
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79978 ']'
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:27.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:27.351 10:55:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:27.351 [2024-07-25 10:55:57.028613] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:17:27.351 [2024-07-25 10:55:57.029091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79978 ]
00:17:27.351 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:27.351 Zero copy mechanism will not be used.
00:17:27.610 [2024-07-25 10:55:57.167469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:27.610 [2024-07-25 10:55:57.316019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:27.869 [2024-07-25 10:55:57.390408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:17:28.435 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:28.435 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:17:28.435 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:28.435 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:28.693 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:28.693 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.693 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:28.693 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.693 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:28.693 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:28.951 nvme0n1
00:17:28.951 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:28.951 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.951 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:28.951 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.951 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:28.951 10:55:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:29.210 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:29.210 Zero copy mechanism will not be used.
00:17:29.210 Running I/O for 2 seconds...
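For reference, the error pass traced above condenses to the commands below. This is a sketch assembled from the xtrace lines in this log, not an excerpt of digest.sh itself; bperf_rpc, rpc_cmd and bperf_py are the suite's own wrapper functions as they appear in the trace, and the socket rpc_cmd talks to is not shown in this excerpt, so that call is left in wrapper form.

# start bdevperf idle (-z waits for an RPC trigger) on core mask 0x2 with a 131072-byte randread job, queue depth 16, 2 s runtime (digest.sh@57); the script records its pid (79978 in this run)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# tally NVMe errors per status code and keep retrying instead of failing the bdev (digest.sh@61)
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# attach the target with data digest enabled so received payloads are CRC-checked (digest.sh@64); this creates bdev nvme0n1
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# inject crc32c corruption in the accel layer (the -i 32 argument as recorded in the trace), which drives the data digest errors logged below (digest.sh@67)
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# run the queued job, then read back the counter that digest.sh@71 asserts is non-zero (it was 117 for the 4096-byte run above)
bperf_py perform_tests
bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'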
00:17:29.210 [2024-07-25 10:55:58.757813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.210 [2024-07-25 10:55:58.757931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.210 [2024-07-25 10:55:58.757948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.210 [2024-07-25 10:55:58.762830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.210 [2024-07-25 10:55:58.762878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.210 [2024-07-25 10:55:58.762894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.210 [2024-07-25 10:55:58.767654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.210 [2024-07-25 10:55:58.767693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.210 [2024-07-25 10:55:58.767707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.210 [2024-07-25 10:55:58.772938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.210 [2024-07-25 10:55:58.772976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.210 [2024-07-25 10:55:58.772990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.210 [2024-07-25 10:55:58.777978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.210 [2024-07-25 10:55:58.778014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.210 [2024-07-25 10:55:58.778053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.210 [2024-07-25 10:55:58.783063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.210 [2024-07-25 10:55:58.783101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.210 [2024-07-25 10:55:58.783116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.210 [2024-07-25 10:55:58.788105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.788164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.788178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.793180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.793219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.793232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.798108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.798146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.798160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.803280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.803317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.803332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.808246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.808283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.808297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.813185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.813224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.813239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.817975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.818012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.818035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.822805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.822842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.822873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.827642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.827680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.827694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.832567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.832604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.832618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.837439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.837478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.837492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.842228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.842265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.842280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.847015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.847052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.847065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.851855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.851903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.851917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.856786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.856824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:29.211 [2024-07-25 10:55:58.856839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.861689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.861727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.861740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.866570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.866608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.866622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.871508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.871545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.871559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.876350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.876388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.876401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.881217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.881255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.881268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.886119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.886157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.886171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.891012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.891048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.891062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.895778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.895815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.895829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.900658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.900696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.900711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.905549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.905587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.905600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.910402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.910440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.211 [2024-07-25 10:55:58.910455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.211 [2024-07-25 10:55:58.915279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.211 [2024-07-25 10:55:58.915316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.212 [2024-07-25 10:55:58.915329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.212 [2024-07-25 10:55:58.920159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.212 [2024-07-25 10:55:58.920197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.212 [2024-07-25 10:55:58.920210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.212 [2024-07-25 10:55:58.925000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.212 [2024-07-25 10:55:58.925036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.212 [2024-07-25 10:55:58.925050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.212 [2024-07-25 10:55:58.929850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.212 [2024-07-25 10:55:58.929900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.212 [2024-07-25 10:55:58.929914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.212 [2024-07-25 10:55:58.934799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.212 [2024-07-25 10:55:58.934837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.212 [2024-07-25 10:55:58.934863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.212 [2024-07-25 10:55:58.939690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.212 [2024-07-25 10:55:58.939729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.212 [2024-07-25 10:55:58.939744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.212 [2024-07-25 10:55:58.944628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.212 [2024-07-25 10:55:58.944667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.212 [2024-07-25 10:55:58.944680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.949557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:58.949594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.949607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.954555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:58.954594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.954608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.959388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 
00:17:29.472 [2024-07-25 10:55:58.959426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.959439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.964305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:58.964342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.964356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.969237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:58.969276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.969290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.974307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:58.974349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.974364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.979390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:58.979427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.979441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.984488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:58.984527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.984552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.989414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:58.989455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.989470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.994355] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:58.994396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.994410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:58.999320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:58.999357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:58.999371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:59.004241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:59.004277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:59.004290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:59.009120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:59.009158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:59.009172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.472 [2024-07-25 10:55:59.014011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.472 [2024-07-25 10:55:59.014073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.472 [2024-07-25 10:55:59.014088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.018852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.018901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.018916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.023713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.023751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.023764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.028586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.028623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.028637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.033386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.033423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.033436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.038251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.038289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.038303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.043058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.043095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.043108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.047894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.047930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.047944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.052860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.052938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.052954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.057908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.057943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.057956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.062906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.062953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.062967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.067782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.067820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.067834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.072747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.072783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.072797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.077642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.077689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.077702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.083971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.084009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.084023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.088871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.088906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.088920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.093762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.093799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.093813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.098684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.098721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.098734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.103629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.103669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.103682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.108711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.108749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.108762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.113686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.113724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.113738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.118621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.118660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.118674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.123527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.123564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.123577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.128352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.128389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.128403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.133203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.133240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.133254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.138136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.473 [2024-07-25 10:55:59.138174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.473 [2024-07-25 10:55:59.138188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.473 [2024-07-25 10:55:59.142957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.142992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.143006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.147876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.147911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.147924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.152781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.152829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.152843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.157722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.157760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.157774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.162625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.162662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.162675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.167464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.167501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.167515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.172250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.172298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.172324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.177102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.177139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.177152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.181938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.181984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.181997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.186780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.186817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.186831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.191608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.191645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.191659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.196524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.196560] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.196574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.201573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.201610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.201624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.474 [2024-07-25 10:55:59.206888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.474 [2024-07-25 10:55:59.206954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.474 [2024-07-25 10:55:59.206969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.212152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.212190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.734 [2024-07-25 10:55:59.212220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.217102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.217141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.734 [2024-07-25 10:55:59.217155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.221984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.222062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.734 [2024-07-25 10:55:59.222089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.226867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.226919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.734 [2024-07-25 10:55:59.226933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.231775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.231813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.734 [2024-07-25 10:55:59.231826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.236664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.236701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.734 [2024-07-25 10:55:59.236715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.241501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.241539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.734 [2024-07-25 10:55:59.241553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.246451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.246489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.734 [2024-07-25 10:55:59.246518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.251400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.251437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.734 [2024-07-25 10:55:59.251451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.256485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.256522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.734 [2024-07-25 10:55:59.256535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.734 [2024-07-25 10:55:59.261447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.734 [2024-07-25 10:55:59.261484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.261498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.266343] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.266392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.266407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.271204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.271241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.271254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.276059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.276124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.276137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.280971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.281008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.281021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.285816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.285867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.285883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.290651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.290689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.290703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.295515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.295551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.295564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.300386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.300440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.300454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.305284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.305320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.305334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.310240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.310278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.310292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.315094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.315130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.315144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.319888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.319923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.319937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.324749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.324786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.324799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.329674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.329712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.329726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.334609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.334647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.334660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.339499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.339536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.339549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.735 [2024-07-25 10:55:59.344342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.735 [2024-07-25 10:55:59.344379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.735 [2024-07-25 10:55:59.344394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.349218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.349255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.349270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.354233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.354271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.354285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.359160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.359197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.359211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.363988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.364024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.364038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.369038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.369089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.369104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.374170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.374209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.374224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.379139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.379176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.379190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.384033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.384068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.384082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.388964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.388999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.389012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.393755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.393792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.393806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.398671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.398708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:29.736 [2024-07-25 10:55:59.398722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.403548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.403584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.403598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.408429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.408466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.408480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.413388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.413424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.413438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.418262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.418300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.418315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.423068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.423103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.423116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.427958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.428023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.428037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.736 [2024-07-25 10:55:59.432751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.736 [2024-07-25 10:55:59.432788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.736 [2024-07-25 10:55:59.432802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.737 [2024-07-25 10:55:59.437834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.737 [2024-07-25 10:55:59.437899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.737 [2024-07-25 10:55:59.437913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.737 [2024-07-25 10:55:59.442732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.737 [2024-07-25 10:55:59.442772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.737 [2024-07-25 10:55:59.442787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.737 [2024-07-25 10:55:59.447542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.737 [2024-07-25 10:55:59.447580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.737 [2024-07-25 10:55:59.447594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.737 [2024-07-25 10:55:59.452538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.737 [2024-07-25 10:55:59.452577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.737 [2024-07-25 10:55:59.452592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.737 [2024-07-25 10:55:59.457421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.737 [2024-07-25 10:55:59.457459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.737 [2024-07-25 10:55:59.457473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.737 [2024-07-25 10:55:59.462324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.737 [2024-07-25 10:55:59.462362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.737 [2024-07-25 10:55:59.462387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.737 [2024-07-25 10:55:59.467208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.737 [2024-07-25 10:55:59.467260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.737 [2024-07-25 10:55:59.467273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.472186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.472223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.472237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.477080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.477116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.477130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.481838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.481885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.481912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.486676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.486714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.486727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.491541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.491577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.491591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.496409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.496446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.496459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.501302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 
00:17:29.997 [2024-07-25 10:55:59.501339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.501353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.506210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.506248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.506262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.511118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.511154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.511168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.515974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.516031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.516046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.520918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.520954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.520968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.525859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.525923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.525938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.530864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.530926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.530940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.535697] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.535733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.535746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.540630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.540668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.540682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.545416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.545454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.545467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.550264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.550301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.997 [2024-07-25 10:55:59.550314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.997 [2024-07-25 10:55:59.555139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.997 [2024-07-25 10:55:59.555176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.555189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.559818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.559874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.559889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.564511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.564548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.564562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.569409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.569446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.569460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.574272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.574310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.574324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.579130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.579167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.579181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.583925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.583960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.583975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.588741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.588778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.588793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.593589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.593625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.593638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.598493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.598530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.598543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.603473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.603510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.603524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.608550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.608587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.608601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.613487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.613524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.613538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.618354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.618399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.618413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.623311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.623347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.623360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.628315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.628352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.628365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.633384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.633426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.633468] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.638599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.638635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.638649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.643578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.643614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.643628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.648602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.648639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.648653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.653718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.653754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.653768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.658604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.658641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.658655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.663521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.663558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.663571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.668366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.668403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.668417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.673350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.673387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.673401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.678227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.678267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.678282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.683212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.998 [2024-07-25 10:55:59.683251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.998 [2024-07-25 10:55:59.683265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.998 [2024-07-25 10:55:59.688337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.999 [2024-07-25 10:55:59.688388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.999 [2024-07-25 10:55:59.688402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.999 [2024-07-25 10:55:59.693207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.999 [2024-07-25 10:55:59.693245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.999 [2024-07-25 10:55:59.693259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.999 [2024-07-25 10:55:59.698141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.999 [2024-07-25 10:55:59.698180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.999 [2024-07-25 10:55:59.698194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.999 [2024-07-25 10:55:59.703063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.999 [2024-07-25 10:55:59.703114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.999 [2024-07-25 10:55:59.703127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.999 [2024-07-25 10:55:59.708024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.999 [2024-07-25 10:55:59.708060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.999 [2024-07-25 10:55:59.708074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.999 [2024-07-25 10:55:59.712955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.999 [2024-07-25 10:55:59.712990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.999 [2024-07-25 10:55:59.713003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.999 [2024-07-25 10:55:59.717848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.999 [2024-07-25 10:55:59.717899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.999 [2024-07-25 10:55:59.717913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.999 [2024-07-25 10:55:59.722832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.999 [2024-07-25 10:55:59.722877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.999 [2024-07-25 10:55:59.722894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.999 [2024-07-25 10:55:59.727807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.999 [2024-07-25 10:55:59.727844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.999 [2024-07-25 10:55:59.727886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.999 [2024-07-25 10:55:59.732791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:29.999 [2024-07-25 10:55:59.732828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.999 [2024-07-25 10:55:59.732843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.737654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.737691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.737705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.742652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.742688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.742703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.747592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.747628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.747642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.752461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.752499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.752513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.757296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.757331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.757345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.762187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.762225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.762239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.766978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.767015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.767028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.771784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 
00:17:30.259 [2024-07-25 10:55:59.771821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.771835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.776591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.776628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.776642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.781485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.781522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.781536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.786303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.786341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.786354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.791039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.791074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.791089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.795834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.795883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.795897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.800542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.800579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.800593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.805404] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.805441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.805455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.810276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.810315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.810330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.815182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.815218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.259 [2024-07-25 10:55:59.815232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.259 [2024-07-25 10:55:59.820038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.259 [2024-07-25 10:55:59.820073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.820086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.824929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.824964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.824977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.829864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.829898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.829911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.834731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.834767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.834781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.839563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.839601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.839615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.844422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.844459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.844472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.849209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.849246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.849259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.854113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.854151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.854165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.858996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.859031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.859055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.863771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.863808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.863821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.868583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.868620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.868634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.873418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.873454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.873468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.878242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.878280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.878295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.883169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.883221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.883235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.888215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.888252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.888266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.893083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.893119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.893133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.897908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.897943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.897956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.902740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.902777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.902791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.907629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.907669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.907683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.912446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.912483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.912497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.917354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.917391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.917405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.922184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.922222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.922236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.927020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.927072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.927087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.931904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.931957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.931974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.936782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.936820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
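Background note on the repeated records above, as an illustrative sketch only (not SPDK code): each triple shows nvme_tcp.c reporting a data digest error on the receive path, after which the affected READ completes with TRANSIENT TRANSPORT ERROR (00/22). In NVMe/TCP, the data digest (DDGST) is a CRC32C computed over the payload of a data PDU; a receiver with digests enabled recomputes the CRC over the bytes it actually received and compares it with the DDGST carried in the PDU, and a mismatch is what gets logged here while the digest-error test keeps injecting corruption. The helper names below (crc32c, ddgst_ok) are invented for this sketch.

/* Illustrative sketch only: recompute a CRC32C data digest over a received
 * payload and compare it with the digest carried in the PDU. Helper names
 * are invented for this example; this is not SPDK's implementation. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected form, reversed polynomial
 * 0x82F63B78. Real transports use table- or instruction-accelerated
 * versions; this form is just easy to read. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Returns 1 when the recomputed digest matches the received DDGST value. */
static int ddgst_ok(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
	return crc32c(payload, len) == recv_ddgst;
}

int main(void)
{
	uint8_t payload[] = "nvme/tcp c2h data";
	uint32_t good = crc32c(payload, sizeof(payload));

	/* A digest that no longer matches the data is what the test above
	 * injects; the mismatch is logged as "data digest error". */
	printf("intact digest ok: %d\n", ddgst_ok(payload, sizeof(payload), good));
	printf("corrupted digest ok: %d\n", ddgst_ok(payload, sizeof(payload), good ^ 1u));
	return 0;
}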
00:17:30.260 [2024-07-25 10:55:59.936848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.941725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.941762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.941775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.946509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.260 [2024-07-25 10:55:59.946547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.260 [2024-07-25 10:55:59.946561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.260 [2024-07-25 10:55:59.951355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.261 [2024-07-25 10:55:59.951392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.261 [2024-07-25 10:55:59.951406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.261 [2024-07-25 10:55:59.956260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.261 [2024-07-25 10:55:59.956296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.261 [2024-07-25 10:55:59.956309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.261 [2024-07-25 10:55:59.961035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.261 [2024-07-25 10:55:59.961070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.261 [2024-07-25 10:55:59.961083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.261 [2024-07-25 10:55:59.965849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.261 [2024-07-25 10:55:59.965896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.261 [2024-07-25 10:55:59.965911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.261 [2024-07-25 10:55:59.970741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.261 [2024-07-25 10:55:59.970778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.261 [2024-07-25 10:55:59.970792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.261 [2024-07-25 10:55:59.975564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.261 [2024-07-25 10:55:59.975600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.261 [2024-07-25 10:55:59.975614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.261 [2024-07-25 10:55:59.980371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.261 [2024-07-25 10:55:59.980408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.261 [2024-07-25 10:55:59.980421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.261 [2024-07-25 10:55:59.985322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.261 [2024-07-25 10:55:59.985358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.261 [2024-07-25 10:55:59.985371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.261 [2024-07-25 10:55:59.990294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.261 [2024-07-25 10:55:59.990332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.261 [2024-07-25 10:55:59.990358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.261 [2024-07-25 10:55:59.995454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.261 [2024-07-25 10:55:59.995493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.261 [2024-07-25 10:55:59.995507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.000529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.000584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.000598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.005648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.005686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.005700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.010698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.010737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.010751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.015674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.015713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.015742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.020732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.020770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.020784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.025776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.025815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.025829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.030709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.030747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.030761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.035731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.035783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.035797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.040780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 
00:17:30.521 [2024-07-25 10:56:00.040817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.040831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.045642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.045679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.045693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.050546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.050597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.050618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.055514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.055550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.055565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.060317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.060354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.060367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.065189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.065225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.065238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.070086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.070122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.070136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.074940] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.074973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.074987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.079712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.079746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.079760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.084517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.084552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.084565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.089376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.089411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.089424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.094224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.094260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.094273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.099061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.099095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.099108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.103899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.103934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.103961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.108834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.108879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.108893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.113749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.113783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.113795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.118724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.521 [2024-07-25 10:56:00.118759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.521 [2024-07-25 10:56:00.118772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.521 [2024-07-25 10:56:00.123633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.123667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.123680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.128505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.128539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.128569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.133397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.133432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.133445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.138362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.138415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.138428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.143399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.143434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.143447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.148376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.148413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.148426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.153226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.153262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.153276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.158104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.158141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.158155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.163004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.163039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.163053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.167993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.168028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.168041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.172755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.172790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.172803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.177635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.177670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.177683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.182621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.182657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.182670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.187502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.187537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.187566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.192386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.192421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.192434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.197339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.197391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.197404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.202309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.202361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.202375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.207247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.207282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:30.522 [2024-07-25 10:56:00.207300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.212216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.212252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.212265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.217182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.217217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.217231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.222228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.222264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.222277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.227220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.227254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.227267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.232142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.232176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.232189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.237191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.237227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.237240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.242049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.242100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.242113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.246998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.247033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.522 [2024-07-25 10:56:00.247046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.522 [2024-07-25 10:56:00.251820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.522 [2024-07-25 10:56:00.251866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.523 [2024-07-25 10:56:00.251880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.523 [2024-07-25 10:56:00.256877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.523 [2024-07-25 10:56:00.256924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.523 [2024-07-25 10:56:00.256938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.782 [2024-07-25 10:56:00.261808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.782 [2024-07-25 10:56:00.261843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.782 [2024-07-25 10:56:00.261886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.266976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.267010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.267027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.271997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.272046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.272061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.276998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.277032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.277045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.282145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.282182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.282196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.287321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.287368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.287381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.292441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.292475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.292488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.297325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.297361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.297374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.302331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.302378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.302393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.307305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.307339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.307352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.312278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.312326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.312339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.317296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.317344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.317358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.322372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.322421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.322434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.327426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.327474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.327487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.332460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.332496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.332512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.337524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.337576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.337589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.342609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.342644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.342658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.347597] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.347632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.347645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.352614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.352653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.352667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.357585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.357620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.357634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.362596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.362631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.362644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.367477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.367511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.367528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.372487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.372522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.372535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.377452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.377486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.377500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.382434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.382469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.382481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.387224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.387260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.387273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.392069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.783 [2024-07-25 10:56:00.392103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.783 [2024-07-25 10:56:00.392117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.783 [2024-07-25 10:56:00.396961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.396994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.397007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.401830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.401878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.401892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.406930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.406964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.406985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.411832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.411877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.411891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.416675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.416710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.416723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.421658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.421694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.421708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.426567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.426616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.426630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.431576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.431626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.431639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.436524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.436566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.436582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.441419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.441455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.441469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.446266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.446303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.446317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.451256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.451300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.451313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.456277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.456312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.456325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.461165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.461199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.461212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.466005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.466064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.466078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.470963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.471000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.471014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.476004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.476038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.476051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.480955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.480989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:30.784 [2024-07-25 10:56:00.481003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.485903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.485948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.485961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.490878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.490927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.490941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.495813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.495864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.495879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.500827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.500873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.500887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.505800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.505836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.505861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.510744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.510779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.510793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:30.784 [2024-07-25 10:56:00.515713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:30.784 [2024-07-25 10:56:00.515750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.784 [2024-07-25 10:56:00.515763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.044 [2024-07-25 10:56:00.520645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.044 [2024-07-25 10:56:00.520680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.044 [2024-07-25 10:56:00.520693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.044 [2024-07-25 10:56:00.525662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.044 [2024-07-25 10:56:00.525698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.044 [2024-07-25 10:56:00.525712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.044 [2024-07-25 10:56:00.530686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.044 [2024-07-25 10:56:00.530721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.044 [2024-07-25 10:56:00.530734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.044 [2024-07-25 10:56:00.535723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.044 [2024-07-25 10:56:00.535760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.044 [2024-07-25 10:56:00.535773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.044 [2024-07-25 10:56:00.540765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.044 [2024-07-25 10:56:00.540802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.044 [2024-07-25 10:56:00.540815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.545632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.545667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.545681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.550614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.550650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.550663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.555526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.555580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.555593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.560486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.560521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.560534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.565373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.565409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.565424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.570256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.570292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.570305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.575244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.575293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.575322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.580379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.580427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.580441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.585258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 
00:17:31.045 [2024-07-25 10:56:00.585307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.585320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.590250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.590287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.590300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.595047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.595081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.595094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.599975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.600009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.600022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.604879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.604913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.604926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.609688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.609723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.609737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.614556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.614591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.614603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.619582] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.619617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.619630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.624465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.624513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.624525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.629575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.629610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.629623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.634772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.634809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.634823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.639623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.639659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.639672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.644564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.644602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.644616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.649479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.649514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.649527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.654389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.654427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.654440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.659259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.659294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.659306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.664269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.664318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.664332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.669224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.669260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.669272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.045 [2024-07-25 10:56:00.674266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.045 [2024-07-25 10:56:00.674301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.045 [2024-07-25 10:56:00.674315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.679224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.679273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.679286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.684135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.684169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.684188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.689053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.689087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.689099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.694011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.694086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.694100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.698859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.698903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.698916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.703729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.703764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.703778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.708729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.708764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.708778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.713789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.713825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.713838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.718839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.718883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.718896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.723680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.723728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.723741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.728687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.728721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.728734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.733620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.733655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.733668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.738512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.738547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.738560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.743486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.743522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.743538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.046 [2024-07-25 10:56:00.748313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2261200) 00:17:31.046 [2024-07-25 10:56:00.748349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.046 [2024-07-25 10:56:00.748362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.046 00:17:31.046 Latency(us) 00:17:31.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.046 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:31.046 nvme0n1 : 2.00 6268.29 783.54 0.00 0.00 2548.78 2263.97 9770.82 00:17:31.046 
=================================================================================================================== 00:17:31.046 Total : 6268.29 783.54 0.00 0.00 2548.78 2263.97 9770.82 00:17:31.046 0 00:17:31.046 10:56:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:31.046 10:56:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:31.046 10:56:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:31.046 | .driver_specific 00:17:31.046 | .nvme_error 00:17:31.046 | .status_code 00:17:31.046 | .command_transient_transport_error' 00:17:31.046 10:56:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 405 > 0 )) 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79978 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79978 ']' 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79978 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79978 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:31.305 killing process with pid 79978 00:17:31.305 Received shutdown signal, test time was about 2.000000 seconds 00:17:31.305 00:17:31.305 Latency(us) 00:17:31.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.305 =================================================================================================================== 00:17:31.305 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79978' 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79978 00:17:31.305 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79978 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80043 00:17:31.874 10:56:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80043 /var/tmp/bperf.sock 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80043 ']' 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.874 10:56:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.874 [2024-07-25 10:56:01.405265] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:31.874 [2024-07-25 10:56:01.405372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80043 ] 00:17:31.874 [2024-07-25 10:56:01.539440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.132 [2024-07-25 10:56:01.681603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.132 [2024-07-25 10:56:01.756859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:32.699 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.699 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:32.699 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:32.699 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:32.957 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:32.957 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.957 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:32.957 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.957 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:32.957 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.215 nvme0n1 00:17:33.215 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:33.216 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.216 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:33.216 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.216 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:33.216 10:56:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:33.474 Running I/O for 2 seconds... 00:17:33.474 [2024-07-25 10:56:03.082657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fef90 00:17:33.474 [2024-07-25 10:56:03.085315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.475 [2024-07-25 10:56:03.085391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.475 [2024-07-25 10:56:03.098906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190feb58 00:17:33.475 [2024-07-25 10:56:03.101384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.475 [2024-07-25 10:56:03.101433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:33.475 [2024-07-25 10:56:03.114832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fe2e8 00:17:33.475 [2024-07-25 10:56:03.117393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.475 [2024-07-25 10:56:03.117443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:33.475 [2024-07-25 10:56:03.130806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fda78 00:17:33.475 [2024-07-25 10:56:03.133248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.475 [2024-07-25 10:56:03.133283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:33.475 [2024-07-25 10:56:03.146607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fd208 00:17:33.475 [2024-07-25 10:56:03.149202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.475 [2024-07-25 10:56:03.149247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 
p:0 m:0 dnr:0 00:17:33.475 [2024-07-25 10:56:03.162663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fc998 00:17:33.475 [2024-07-25 10:56:03.165158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.475 [2024-07-25 10:56:03.165196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:33.475 [2024-07-25 10:56:03.178806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fc128 00:17:33.475 [2024-07-25 10:56:03.181204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.475 [2024-07-25 10:56:03.181239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:33.475 [2024-07-25 10:56:03.194966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fb8b8 00:17:33.475 [2024-07-25 10:56:03.197416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.475 [2024-07-25 10:56:03.197452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:33.475 [2024-07-25 10:56:03.210838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fb048 00:17:33.734 [2024-07-25 10:56:03.213133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.734 [2024-07-25 10:56:03.213175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:33.734 [2024-07-25 10:56:03.226500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fa7d8 00:17:33.734 [2024-07-25 10:56:03.228806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.734 [2024-07-25 10:56:03.228862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:33.734 [2024-07-25 10:56:03.242317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f9f68 00:17:33.734 [2024-07-25 10:56:03.244553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.734 [2024-07-25 10:56:03.244603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:33.734 [2024-07-25 10:56:03.258189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f96f8 00:17:33.734 [2024-07-25 10:56:03.260480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.734 [2024-07-25 10:56:03.260530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 
cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:33.734 [2024-07-25 10:56:03.273924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f8e88 00:17:33.734 [2024-07-25 10:56:03.276162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.734 [2024-07-25 10:56:03.276211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:33.734 [2024-07-25 10:56:03.289805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f8618 00:17:33.734 [2024-07-25 10:56:03.292107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.734 [2024-07-25 10:56:03.292154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:33.734 [2024-07-25 10:56:03.305469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f7da8 00:17:33.734 [2024-07-25 10:56:03.307835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.734 [2024-07-25 10:56:03.307900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:33.734 [2024-07-25 10:56:03.321835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f7538 00:17:33.734 [2024-07-25 10:56:03.324068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.734 [2024-07-25 10:56:03.324102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:33.734 [2024-07-25 10:56:03.338066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f6cc8 00:17:33.734 [2024-07-25 10:56:03.340362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.734 [2024-07-25 10:56:03.340425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:33.734 [2024-07-25 10:56:03.354802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f6458 00:17:33.734 [2024-07-25 10:56:03.357040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.735 [2024-07-25 10:56:03.357078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:33.735 [2024-07-25 10:56:03.371361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f5be8 00:17:33.735 [2024-07-25 10:56:03.373611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.735 [2024-07-25 10:56:03.373648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:33.735 [2024-07-25 10:56:03.387990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f5378 00:17:33.735 [2024-07-25 10:56:03.390185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.735 [2024-07-25 10:56:03.390221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:33.735 [2024-07-25 10:56:03.404269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f4b08 00:17:33.735 [2024-07-25 10:56:03.406478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.735 [2024-07-25 10:56:03.406511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:33.735 [2024-07-25 10:56:03.420351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f4298 00:17:33.735 [2024-07-25 10:56:03.422487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.735 [2024-07-25 10:56:03.422519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:33.735 [2024-07-25 10:56:03.436763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f3a28 00:17:33.735 [2024-07-25 10:56:03.438982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.735 [2024-07-25 10:56:03.439030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:33.735 [2024-07-25 10:56:03.453230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f31b8 00:17:33.735 [2024-07-25 10:56:03.455335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.735 [2024-07-25 10:56:03.455370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:33.735 [2024-07-25 10:56:03.469769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f2948 00:17:33.994 [2024-07-25 10:56:03.471886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.471919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.486338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f20d8 00:17:33.994 [2024-07-25 10:56:03.488382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.488415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.502850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f1868 00:17:33.994 [2024-07-25 10:56:03.504937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.504972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.519376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f0ff8 00:17:33.994 [2024-07-25 10:56:03.521505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.521570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.536145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f0788 00:17:33.994 [2024-07-25 10:56:03.538190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.538241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.552071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190eff18 00:17:33.994 [2024-07-25 10:56:03.554080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.554115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.568326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ef6a8 00:17:33.994 [2024-07-25 10:56:03.570446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.570493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.584542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190eee38 00:17:33.994 [2024-07-25 10:56:03.586520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.586564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.600777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ee5c8 00:17:33.994 [2024-07-25 10:56:03.602720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.602755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.617415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190edd58 00:17:33.994 [2024-07-25 10:56:03.619319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.619364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.633706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ed4e8 00:17:33.994 [2024-07-25 10:56:03.635591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.635626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.649756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ecc78 00:17:33.994 [2024-07-25 10:56:03.651614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.651647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.665989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ec408 00:17:33.994 [2024-07-25 10:56:03.667904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.667943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.682280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ebb98 00:17:33.994 [2024-07-25 10:56:03.684131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.684165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.698592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190eb328 00:17:33.994 [2024-07-25 10:56:03.700396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.700433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:33.994 [2024-07-25 10:56:03.714936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190eaab8 00:17:33.994 [2024-07-25 10:56:03.716713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:33.994 [2024-07-25 10:56:03.716753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.731370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ea248 00:17:34.253 [2024-07-25 10:56:03.733255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.733305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.748007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e99d8 00:17:34.253 [2024-07-25 10:56:03.749720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.749757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.764458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e9168 00:17:34.253 [2024-07-25 10:56:03.766168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.766207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.780573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e88f8 00:17:34.253 [2024-07-25 10:56:03.782285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.782323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.797076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e8088 00:17:34.253 [2024-07-25 10:56:03.798781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.798829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.813188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e7818 00:17:34.253 [2024-07-25 10:56:03.814955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.814992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.829405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e6fa8 00:17:34.253 [2024-07-25 10:56:03.831003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.831039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.845295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e6738 00:17:34.253 [2024-07-25 10:56:03.846941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.846978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.861239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e5ec8 00:17:34.253 [2024-07-25 10:56:03.862836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.862896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.876956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e5658 00:17:34.253 [2024-07-25 10:56:03.878518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.878557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.892494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e4de8 00:17:34.253 [2024-07-25 10:56:03.893944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.893981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.907833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e4578 00:17:34.253 [2024-07-25 10:56:03.909314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.909351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.923318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e3d08 00:17:34.253 [2024-07-25 10:56:03.924738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 10:56:03.924775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.938767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e3498 00:17:34.253 [2024-07-25 10:56:03.940311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.253 [2024-07-25 
10:56:03.940346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:34.253 [2024-07-25 10:56:03.954378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e2c28 00:17:34.254 [2024-07-25 10:56:03.955904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.254 [2024-07-25 10:56:03.955947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:34.254 [2024-07-25 10:56:03.970663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e23b8 00:17:34.254 [2024-07-25 10:56:03.972148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.254 [2024-07-25 10:56:03.972182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:34.254 [2024-07-25 10:56:03.986406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e1b48 00:17:34.254 [2024-07-25 10:56:03.987802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.254 [2024-07-25 10:56:03.987837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:34.512 [2024-07-25 10:56:04.002222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e12d8 00:17:34.512 [2024-07-25 10:56:04.003635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.512 [2024-07-25 10:56:04.003670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:34.512 [2024-07-25 10:56:04.018011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e0a68 00:17:34.512 [2024-07-25 10:56:04.019412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.512 [2024-07-25 10:56:04.019461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:34.512 [2024-07-25 10:56:04.034504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e01f8 00:17:34.512 [2024-07-25 10:56:04.035960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.512 [2024-07-25 10:56:04.035997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:34.512 [2024-07-25 10:56:04.051003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190df988 00:17:34.512 [2024-07-25 10:56:04.052387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:34.512 [2024-07-25 10:56:04.052424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:34.512 [2024-07-25 10:56:04.067467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190df118 00:17:34.513 [2024-07-25 10:56:04.068870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.068966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:34.513 [2024-07-25 10:56:04.083998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190de8a8 00:17:34.513 [2024-07-25 10:56:04.085405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.085441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:34.513 [2024-07-25 10:56:04.100452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190de038 00:17:34.513 [2024-07-25 10:56:04.101843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.101890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:34.513 [2024-07-25 10:56:04.123839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190de038 00:17:34.513 [2024-07-25 10:56:04.126385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.126423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.513 [2024-07-25 10:56:04.139922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190de8a8 00:17:34.513 [2024-07-25 10:56:04.142520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.142556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:34.513 [2024-07-25 10:56:04.156065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190df118 00:17:34.513 [2024-07-25 10:56:04.158628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.158663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:34.513 [2024-07-25 10:56:04.172119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190df988 00:17:34.513 [2024-07-25 10:56:04.174629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8214 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.174664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:34.513 [2024-07-25 10:56:04.188367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e01f8 00:17:34.513 [2024-07-25 10:56:04.190853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.190925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:34.513 [2024-07-25 10:56:04.204585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e0a68 00:17:34.513 [2024-07-25 10:56:04.207072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.207107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:34.513 [2024-07-25 10:56:04.220731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e12d8 00:17:34.513 [2024-07-25 10:56:04.223180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.223215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:34.513 [2024-07-25 10:56:04.236620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e1b48 00:17:34.513 [2024-07-25 10:56:04.239039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.513 [2024-07-25 10:56:04.239074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.252825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e23b8 00:17:34.772 [2024-07-25 10:56:04.255374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.255409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.269322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e2c28 00:17:34.772 [2024-07-25 10:56:04.271757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.271795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.285751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e3498 00:17:34.772 [2024-07-25 10:56:04.288131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:23738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.288171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.301941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e3d08 00:17:34.772 [2024-07-25 10:56:04.304258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.304294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.318010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e4578 00:17:34.772 [2024-07-25 10:56:04.320297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.320332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.334074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e4de8 00:17:34.772 [2024-07-25 10:56:04.336369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.336407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.350056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e5658 00:17:34.772 [2024-07-25 10:56:04.352340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.352376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.366222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e5ec8 00:17:34.772 [2024-07-25 10:56:04.368478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.368517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.382303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e6738 00:17:34.772 [2024-07-25 10:56:04.384531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.384570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.399221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e6fa8 00:17:34.772 [2024-07-25 10:56:04.401487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:10531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.401526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.415617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e7818 00:17:34.772 [2024-07-25 10:56:04.417820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.772 [2024-07-25 10:56:04.417894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:34.772 [2024-07-25 10:56:04.431900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e8088 00:17:34.772 [2024-07-25 10:56:04.434113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.773 [2024-07-25 10:56:04.434150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:34.773 [2024-07-25 10:56:04.447760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e88f8 00:17:34.773 [2024-07-25 10:56:04.449989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.773 [2024-07-25 10:56:04.450033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:34.773 [2024-07-25 10:56:04.464100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e9168 00:17:34.773 [2024-07-25 10:56:04.466260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.773 [2024-07-25 10:56:04.466297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:34.773 [2024-07-25 10:56:04.480126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190e99d8 00:17:34.773 [2024-07-25 10:56:04.482150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.773 [2024-07-25 10:56:04.482186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:34.773 [2024-07-25 10:56:04.496380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ea248 00:17:34.773 [2024-07-25 10:56:04.498448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.773 [2024-07-25 10:56:04.498499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.512386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190eaab8 00:17:35.032 [2024-07-25 10:56:04.514570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:22564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.514606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.528534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190eb328 00:17:35.032 [2024-07-25 10:56:04.530605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.530642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.544593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ebb98 00:17:35.032 [2024-07-25 10:56:04.546606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.546646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.560634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ec408 00:17:35.032 [2024-07-25 10:56:04.562626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.562665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.576584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ecc78 00:17:35.032 [2024-07-25 10:56:04.578574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.578626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.592521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ed4e8 00:17:35.032 [2024-07-25 10:56:04.594628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.594665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.608723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190edd58 00:17:35.032 [2024-07-25 10:56:04.610736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.610771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.624818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ee5c8 00:17:35.032 [2024-07-25 10:56:04.626776] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.626815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.640624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190eee38 00:17:35.032 [2024-07-25 10:56:04.642531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.642566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.656545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190ef6a8 00:17:35.032 [2024-07-25 10:56:04.658482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.658518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.672281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190eff18 00:17:35.032 [2024-07-25 10:56:04.674078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.674115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.687797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f0788 00:17:35.032 [2024-07-25 10:56:04.689647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.689682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.703573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f0ff8 00:17:35.032 [2024-07-25 10:56:04.705402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.705436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.719404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f1868 00:17:35.032 [2024-07-25 10:56:04.721246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.721284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.735403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f20d8 00:17:35.032 [2024-07-25 10:56:04.737184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.737235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.751477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f2948 00:17:35.032 [2024-07-25 10:56:04.753230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.032 [2024-07-25 10:56:04.753264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:35.032 [2024-07-25 10:56:04.767192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f31b8 00:17:35.033 [2024-07-25 10:56:04.768901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.033 [2024-07-25 10:56:04.768961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.783222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f3a28 00:17:35.292 [2024-07-25 10:56:04.784946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.784984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.799419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f4298 00:17:35.292 [2024-07-25 10:56:04.801274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.801312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.816194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f4b08 00:17:35.292 [2024-07-25 10:56:04.817929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.817965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.832617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f5378 00:17:35.292 [2024-07-25 10:56:04.834325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.834375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.848617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f5be8 00:17:35.292 [2024-07-25 10:56:04.850274] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.850308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.864480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f6458 00:17:35.292 [2024-07-25 10:56:04.866132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.866165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.880410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f6cc8 00:17:35.292 [2024-07-25 10:56:04.882006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.882066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.896276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f7538 00:17:35.292 [2024-07-25 10:56:04.897858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.897908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.912389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f7da8 00:17:35.292 [2024-07-25 10:56:04.914076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.914110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.928488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f8618 00:17:35.292 [2024-07-25 10:56:04.930120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.930158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.944395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f8e88 00:17:35.292 [2024-07-25 10:56:04.945936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.945969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.960482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f96f8 00:17:35.292 [2024-07-25 
10:56:04.962006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.962065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.976651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190f9f68 00:17:35.292 [2024-07-25 10:56:04.978201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.978237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:04.992827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fa7d8 00:17:35.292 [2024-07-25 10:56:04.994338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:04.994373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:05.008750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fb048 00:17:35.292 [2024-07-25 10:56:05.010260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:05.010309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:35.292 [2024-07-25 10:56:05.024508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fb8b8 00:17:35.292 [2024-07-25 10:56:05.025943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.292 [2024-07-25 10:56:05.025976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:35.551 [2024-07-25 10:56:05.040303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fc128 00:17:35.551 [2024-07-25 10:56:05.041746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.551 [2024-07-25 10:56:05.041795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:35.551 [2024-07-25 10:56:05.056240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd2d650) with pdu=0x2000190fc998 00:17:35.551 [2024-07-25 10:56:05.057628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.551 [2024-07-25 10:56:05.057675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:35.551 00:17:35.551 Latency(us) 00:17:35.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.551 Job: nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.551 nvme0n1 : 2.01 15690.81 61.29 0.00 0.00 8150.34 3991.74 31457.28 00:17:35.551 =================================================================================================================== 00:17:35.551 Total : 15690.81 61.29 0.00 0.00 8150.34 3991.74 31457.28 00:17:35.551 0 00:17:35.551 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:35.551 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:35.551 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:35.551 | .driver_specific 00:17:35.551 | .nvme_error 00:17:35.551 | .status_code 00:17:35.551 | .command_transient_transport_error' 00:17:35.551 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80043 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80043 ']' 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80043 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80043 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:35.810 killing process with pid 80043 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80043' 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80043 00:17:35.810 Received shutdown signal, test time was about 2.000000 seconds 00:17:35.810 00:17:35.810 Latency(us) 00:17:35.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.810 =================================================================================================================== 00:17:35.810 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.810 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80043 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # qd=16 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80099 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80099 /var/tmp/bperf.sock 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80099 ']' 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:36.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:36.069 10:56:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:36.069 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:36.069 Zero copy mechanism will not be used. 00:17:36.069 [2024-07-25 10:56:05.767253] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:36.069 [2024-07-25 10:56:05.767348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80099 ] 00:17:36.328 [2024-07-25 10:56:05.901931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.328 [2024-07-25 10:56:06.045998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.587 [2024-07-25 10:56:06.119965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:37.153 10:56:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.153 10:56:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:37.153 10:56:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:37.153 10:56:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:37.412 10:56:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:37.412 10:56:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.412 10:56:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.412 10:56:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.412 10:56:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.412 10:56:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.672 nvme0n1 00:17:37.672 10:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:37.672 10:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.672 10:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.672 10:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.672 10:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:37.672 10:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:37.932 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:37.932 Zero copy mechanism will not be used. 00:17:37.932 Running I/O for 2 seconds... 00:17:37.932 [2024-07-25 10:56:07.435719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.436053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.436098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.441611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.441927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.441962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.447357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.447646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.447679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.453100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.453394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.453426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.458929] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.459228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.459259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.464658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.464963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.464994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.470485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.470780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.470812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.476312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.476621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.476654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.482254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.482588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.482620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.488085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.488372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.488402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.493984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.494316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.494359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
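The repeating data_crc32_calc_done / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pairs above, which continue below, are the expected output of this pass: digest.sh attaches the controller with --ddgst, injects crc32c corruption through accel_error_inject_error, and then only checks that bdev_get_iostat reports a non-zero command_transient_transport_error count, as the (( 123 > 0 )) check after the previous pass shows. For reference, the randwrite, 128 KiB, qd=16 pass traced above can be reproduced by hand roughly as follows. This is a minimal sketch assembled from the commands shown in the trace; the socket path, target address and NQN are the ones in the trace, and the accel_error_inject_error calls are assumed to go to the NVMe-oF target's default RPC socket (the harness issues them through its rpc_cmd helper).

  # Minimal sketch of the randwrite / 131072-byte / qd=16 digest-error pass (assumptions noted above).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # Start bdevperf in wait-for-RPC mode (-z) on its own RPC socket:
  $SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # The harness waits for the RPC socket before issuing commands (waitforlisten):
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done
  # Record NVMe error completions per bdev and retry failed I/O indefinitely:
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Reset any previously configured crc32c error injection on the target (default RPC socket assumed):
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF TCP controller with data digest enabled:
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject crc32c corruption (arguments taken verbatim from the trace above):
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the 2-second workload, then read the transient transport error count that the test asserts on:
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

With digest errors injected, each failed write completes with status 00/22 (transient transport error) and is counted in the bdev's NVMe error statistics, which is why the burst of NOTICE completions above is a pass condition rather than a failure.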
00:17:37.932 [2024-07-25 10:56:07.499999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.500329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.500360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.505866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.506240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.506272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.511865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.512184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.512209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.517696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.518018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.518074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.523663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.523996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.524049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.529587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.529913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.529942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.535518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.535844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.535885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.541398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.541705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.541736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.547375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.547702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.547733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.932 [2024-07-25 10:56:07.553328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.932 [2024-07-25 10:56:07.553667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.932 [2024-07-25 10:56:07.553692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.559423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.559736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.559767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.565296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.565619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.565643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.571245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.571583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.571615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.577284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.577592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.577624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.583328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.583649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.583680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.589227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.589513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.589560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.595033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.595323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.595353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.600903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.601257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.601288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.606914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.607200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.607226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.612863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.613168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.613197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.618717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.619035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.619066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.624623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.624944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.624975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.630519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.630818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.630844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.636398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.636720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.636752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.642343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.642649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.642679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.648109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.648409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.648439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.654053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.654384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.654415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.659894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.660173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 
10:56:07.660211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.933 [2024-07-25 10:56:07.665759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:37.933 [2024-07-25 10:56:07.666102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.933 [2024-07-25 10:56:07.666135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.193 [2024-07-25 10:56:07.671720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.193 [2024-07-25 10:56:07.672039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.193 [2024-07-25 10:56:07.672070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.193 [2024-07-25 10:56:07.677628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.193 [2024-07-25 10:56:07.677943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.193 [2024-07-25 10:56:07.677976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.193 [2024-07-25 10:56:07.683573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.193 [2024-07-25 10:56:07.683892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.193 [2024-07-25 10:56:07.683924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.193 [2024-07-25 10:56:07.689463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.193 [2024-07-25 10:56:07.689772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.193 [2024-07-25 10:56:07.689802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.193 [2024-07-25 10:56:07.695437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.193 [2024-07-25 10:56:07.695760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.193 [2024-07-25 10:56:07.695798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.193 [2024-07-25 10:56:07.701354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.193 [2024-07-25 10:56:07.701667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:38.193 [2024-07-25 10:56:07.701698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.193 [2024-07-25 10:56:07.707388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.193 [2024-07-25 10:56:07.707702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.193 [2024-07-25 10:56:07.707740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.193 [2024-07-25 10:56:07.713418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.193 [2024-07-25 10:56:07.713755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.193 [2024-07-25 10:56:07.713786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.193 [2024-07-25 10:56:07.719414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.193 [2024-07-25 10:56:07.719706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.193 [2024-07-25 10:56:07.719737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.193 [2024-07-25 10:56:07.725204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.193 [2024-07-25 10:56:07.725525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.725556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.731020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.731309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.731339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.736935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.737259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.737294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.742813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.743125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.743157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.748784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.749112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.749142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.754689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.755004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.755036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.760672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.761004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.761052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.766511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.766822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.766877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.772437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.772763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.772795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.778343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.778669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.778700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.784322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.784644] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.784676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.790193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.790517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.790560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.796341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.796677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.796709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.802282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.802599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.802630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.808217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.808518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.808551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.814207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.814573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.814606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.820153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.820441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.820471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.826143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.826458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.826503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.832024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.832311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.832341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.837906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.838246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.838278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.843828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.844152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.844182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.849653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.849972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.850002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.855650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.855958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.855983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.861498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.861811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.861842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.867416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 
[2024-07-25 10:56:07.867724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.867755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.873293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.873612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.873642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.879093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.194 [2024-07-25 10:56:07.879396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.194 [2024-07-25 10:56:07.879421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.194 [2024-07-25 10:56:07.884999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.195 [2024-07-25 10:56:07.885322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.195 [2024-07-25 10:56:07.885353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.195 [2024-07-25 10:56:07.890955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.195 [2024-07-25 10:56:07.891241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.195 [2024-07-25 10:56:07.891271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.195 [2024-07-25 10:56:07.896957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.195 [2024-07-25 10:56:07.897246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.195 [2024-07-25 10:56:07.897276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.195 [2024-07-25 10:56:07.902838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.195 [2024-07-25 10:56:07.903151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.195 [2024-07-25 10:56:07.903180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.195 [2024-07-25 10:56:07.908742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) 
with pdu=0x2000190fef90 00:17:38.195 [2024-07-25 10:56:07.909063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.195 [2024-07-25 10:56:07.909093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.195 [2024-07-25 10:56:07.914769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.195 [2024-07-25 10:56:07.915087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.195 [2024-07-25 10:56:07.915118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.195 [2024-07-25 10:56:07.920784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.195 [2024-07-25 10:56:07.921113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.195 [2024-07-25 10:56:07.921144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.195 [2024-07-25 10:56:07.926863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.195 [2024-07-25 10:56:07.927197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.195 [2024-07-25 10:56:07.927235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.470 [2024-07-25 10:56:07.932617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.470 [2024-07-25 10:56:07.932943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.470 [2024-07-25 10:56:07.932974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.470 [2024-07-25 10:56:07.938629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.470 [2024-07-25 10:56:07.938944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.470 [2024-07-25 10:56:07.938975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.470 [2024-07-25 10:56:07.944595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.470 [2024-07-25 10:56:07.944931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.470 [2024-07-25 10:56:07.944960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.470 [2024-07-25 10:56:07.950527] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.470 [2024-07-25 10:56:07.950834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.470 [2024-07-25 10:56:07.950889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.470 [2024-07-25 10:56:07.956423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.470 [2024-07-25 10:56:07.956749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.470 [2024-07-25 10:56:07.956780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.470 [2024-07-25 10:56:07.962394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.470 [2024-07-25 10:56:07.962706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.470 [2024-07-25 10:56:07.962737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.470 [2024-07-25 10:56:07.967907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.470 [2024-07-25 10:56:07.967982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.470 [2024-07-25 10:56:07.968005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.470 [2024-07-25 10:56:07.973661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.470 [2024-07-25 10:56:07.973737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.470 [2024-07-25 10:56:07.973761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.470 [2024-07-25 10:56:07.979681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.470 [2024-07-25 10:56:07.979768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.470 [2024-07-25 10:56:07.979792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.470 [2024-07-25 10:56:07.985500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.470 [2024-07-25 10:56:07.985589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:07.985612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:07.991142] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:07.991216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:07.991238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:07.996822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:07.996923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:07.996945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.002675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.002751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.002774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.008504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.008595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.008618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.014395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.014484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.014506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.020185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.020260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.020286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.025899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.025975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.025999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.471 
[2024-07-25 10:56:08.031522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.031600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.031624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.037328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.037405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.037441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.042989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.043067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.043091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.048688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.048763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.048786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.054414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.054495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.054518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.060051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.060126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.060150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.065774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.065848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.065871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:38.471 [2024-07-25 10:56:08.071514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.071596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.071620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.077254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.077344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.077367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.082980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.083051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.083074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.088697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.088774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.088797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.094499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.094578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.094601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.100196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.100272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.100295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.106074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.106150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.106174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.111701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.111776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.111798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.117436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.117510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.117533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.123375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.123470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.123493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.129194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.129272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.129295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.134987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.471 [2024-07-25 10:56:08.135060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.471 [2024-07-25 10:56:08.135092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.471 [2024-07-25 10:56:08.140765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.472 [2024-07-25 10:56:08.140838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.472 [2024-07-25 10:56:08.140861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.472 [2024-07-25 10:56:08.146709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.472 [2024-07-25 10:56:08.146782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.472 [2024-07-25 10:56:08.146806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.472 [2024-07-25 10:56:08.152444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.472 [2024-07-25 10:56:08.152518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.472 [2024-07-25 10:56:08.152545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.472 [2024-07-25 10:56:08.158164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.472 [2024-07-25 10:56:08.158242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.472 [2024-07-25 10:56:08.158267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.472 [2024-07-25 10:56:08.163967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.472 [2024-07-25 10:56:08.164041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.472 [2024-07-25 10:56:08.164064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.472 [2024-07-25 10:56:08.169722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.472 [2024-07-25 10:56:08.169797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.472 [2024-07-25 10:56:08.169821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.472 [2024-07-25 10:56:08.175447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.472 [2024-07-25 10:56:08.175522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.472 [2024-07-25 10:56:08.175563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.472 [2024-07-25 10:56:08.181186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.472 [2024-07-25 10:56:08.181259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.472 [2024-07-25 10:56:08.181282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.472 [2024-07-25 10:56:08.187049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.472 [2024-07-25 10:56:08.187141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.472 [2024-07-25 10:56:08.187165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.472 [2024-07-25 10:56:08.192847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.472 [2024-07-25 10:56:08.192940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.472 [2024-07-25 10:56:08.192963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.198680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.198755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.198781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.204476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.204564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.204588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.210424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.210511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.210536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.216233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.216310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.216334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.222003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.222083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.222107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.227739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.227816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.227839] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.233489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.233578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.233601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.239334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.239424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.239447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.245157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.245249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.245271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.251004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.251112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.251136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.257000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.257101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.257127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.262820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.262932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.262956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.268682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.268763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.268786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.274391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.763 [2024-07-25 10:56:08.274478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.763 [2024-07-25 10:56:08.274516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.763 [2024-07-25 10:56:08.280140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.280232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.280256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.285901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.285988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.286012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.291536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.291624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.291647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.297142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.297232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.297254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.302762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.302852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.302876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.308368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.308462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 
10:56:08.308484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.314091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.314167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.314190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.319657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.319730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.319753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.325261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.325335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.325358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.330972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.331046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.331069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.336694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.336769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.336793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.342359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.342451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.342475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.347997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.348075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:38.764 [2024-07-25 10:56:08.348098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.353656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.353734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.353756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.359415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.359490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.359516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.365076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.365150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.365173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.370986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.371058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.371081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.376658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.376734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.376758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.382332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.382410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.382434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.388084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.388161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.388184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.393747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.393824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.393847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.399505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.399596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.399619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.405283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.405355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.405379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.411090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.411170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.411193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.416882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.416970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.416993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.422727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.422809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.422832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.428549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.764 [2024-07-25 10:56:08.428632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.764 [2024-07-25 10:56:08.428655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.764 [2024-07-25 10:56:08.434191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.434269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.434293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.439799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.439899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.439923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.445496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.445572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.445596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.451181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.451253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.451278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.456791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.456923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.456946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.462453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.462534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.462568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.468230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.468301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.468324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.473978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.474071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.474094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.479739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.479827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.479852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.486041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.486128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.486152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.491798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.491891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.491929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.765 [2024-07-25 10:56:08.497635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:38.765 [2024-07-25 10:56:08.497708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.765 [2024-07-25 10:56:08.497732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.503300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.503375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.503400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.509110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.509182] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.509206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.514878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.514955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.514978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.520613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.520687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.520710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.526294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.526369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.526393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.532062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.532152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.532174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.537925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.537994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.538016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.543664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.543739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.543763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.549330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.549407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.549429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.555047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.555138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.555167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.560796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.560915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.560938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.566499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.566570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.566595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.572221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.572296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.572319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.577998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.578096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.578121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.583681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.583756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.583779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.589305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 
10:56:08.589381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.589404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.595041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.595132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.595154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.600921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.601030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.601053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.606745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.606832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.606855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.612537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.612619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.612641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.618298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.618380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.618405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.624119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.024 [2024-07-25 10:56:08.624197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.624219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.024 [2024-07-25 10:56:08.629890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 
00:17:39.024 [2024-07-25 10:56:08.629962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.024 [2024-07-25 10:56:08.629984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.635604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.635677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.635700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.641206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.641278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.641300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.646840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.646946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.646968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.652603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.652678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.652703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.658339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.658422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.658445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.664182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.664254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.664278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.669833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with 
pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.669925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.669949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.675436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.675524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.675546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.681076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.681152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.681174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.686700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.686775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.686797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.692300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.692373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.692397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.697890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.697963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.697986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.703604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.703685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.703708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.709293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.709367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.709390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.714945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.715037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.715060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.720674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.720752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.720775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.726293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.726393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.726416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.731932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.732011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.732033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.737697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.737774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.737798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.743527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.743608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.743632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.749367] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.749449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.749471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.755135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.755224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.755247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.025 [2024-07-25 10:56:08.760769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.025 [2024-07-25 10:56:08.760865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.025 [2024-07-25 10:56:08.760891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.766492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.766583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.766608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.772224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.772300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.772324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.777762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.777846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.777884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.783532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.783638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.783661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.789212] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.789295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.789318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.794968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.795043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.795066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.800704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.800781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.800805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.806570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.806651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.806675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.812307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.812397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.812420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.818228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.818302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.818327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.823918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.824005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.824028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 
10:56:08.829644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.829721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.829746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.835421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.835512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.835536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.841146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.841219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.841242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.846870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.846966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.846988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.852578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.852667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.852690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.858221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.858313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.858337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.863942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.864040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.864063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
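[editor's note] Every iteration in this excerpt follows the same three-record pattern: tcp.c:2113:data_crc32_calc_done reports a data digest mismatch on the TCP qpair (the DDGST trailing an NVMe/TCP data PDU is a CRC32C over the PDU data, per the NVMe/TCP transport specification), the host layer then prints the affected WRITE (len:32 blocks at varying LBAs on nsid:1), and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level status (Do Not Retry bit clear) rather than a media error. As a minimal, illustrative sketch only (not part of the test suite or the SPDK tree), the Python below condenses such an excerpt into a short summary; the regular expressions assume nothing beyond the exact message formats visible above, and the log file name in the usage comment is hypothetical.

import re
from collections import Counter

# Message formats copied from the records in this excerpt (assumed stable across the run).
DIGEST_RE = re.compile(r"data_crc32_calc_done: \*ERROR\*: Data digest error on tqpair=\((0x[0-9a-f]+)\)")
WRITE_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: WRITE sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) "
    r"\((\d+)/(\d+)\) qid:\d+ cid:\d+ cdw0:\d+ sqhd:[0-9a-f]+ p:\d+ m:\d+ dnr:(\d+)"
)

def summarize(log_text: str) -> None:
    # Collapse the wrapped lines of the archived log back into one whitespace-normalized stream,
    # so records split across line breaks still match.
    text = " ".join(log_text.split())
    digest_errors = DIGEST_RE.findall(text)
    lbas = Counter(int(m.group(4)) for m in WRITE_RE.finditer(text))
    statuses = Counter(
        f"{m.group(1)} ({m.group(2)}/{m.group(3)}) dnr:{m.group(4)}" for m in CPL_RE.finditer(text)
    )
    print(f"data digest errors reported: {len(digest_errors)}")
    print(f"qpairs involved: {sorted(set(digest_errors))}")
    print(f"distinct LBAs in the affected WRITEs: {len(lbas)}")
    for status, count in statuses.most_common():
        print(f"completion status '{status}': {count}")

# Example (hypothetical file name): summarize(open("nvmf-tcp-uring-vg-autotest.log").read())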
00:17:39.287 [2024-07-25 10:56:08.869681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.869772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.869796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.875395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.875474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.875498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.881135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.881225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.881249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.886990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.887081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.887103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.892711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.892797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.892821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.898557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.898633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.898657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.904314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.287 [2024-07-25 10:56:08.904398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.287 [2024-07-25 10:56:08.904420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:17:39.287 [2024-07-25 10:56:08.910136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.910213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.910236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.915790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.915982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.916014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.921452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.921541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.921564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.927242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.927330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.927354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.932987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.933069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.933092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.938718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.938798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.938821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.944518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.944627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.944651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.950382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.950482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.950504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.956226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.956347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.956369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.961988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.962101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.962124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.967857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.967956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.967978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.973723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.973808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.973831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.979470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.979563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.979587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.985198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.985298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.985321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.991076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.991183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.991206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:08.996788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:08.996904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:08.996928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:09.002446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:09.002548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:09.002570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:09.008117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:09.008196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:09.008221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:09.013753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:09.013840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:09.013876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.288 [2024-07-25 10:56:09.019426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.288 [2024-07-25 10:56:09.019503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.288 [2024-07-25 10:56:09.019543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.548 [2024-07-25 10:56:09.025269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.548 [2024-07-25 10:56:09.025375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.548 [2024-07-25 10:56:09.025400] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.548 [2024-07-25 10:56:09.031134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.548 [2024-07-25 10:56:09.031243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.548 [2024-07-25 10:56:09.031267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.548 [2024-07-25 10:56:09.036940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.548 [2024-07-25 10:56:09.037010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.548 [2024-07-25 10:56:09.037033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.548 [2024-07-25 10:56:09.042714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.548 [2024-07-25 10:56:09.042791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.548 [2024-07-25 10:56:09.042814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.548 [2024-07-25 10:56:09.048444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.548 [2024-07-25 10:56:09.048519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.548 [2024-07-25 10:56:09.048557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.548 [2024-07-25 10:56:09.054187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.548 [2024-07-25 10:56:09.054290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.548 [2024-07-25 10:56:09.054314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.548 [2024-07-25 10:56:09.059988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.548 [2024-07-25 10:56:09.060061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.548 [2024-07-25 10:56:09.060083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.548 [2024-07-25 10:56:09.065730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.548 [2024-07-25 10:56:09.065807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.548 [2024-07-25 10:56:09.065831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.548 [2024-07-25 10:56:09.071627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xef1080) with pdu=0x2000190fef90 00:17:39.548 [2024-07-25 10:56:09.071707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.548 [2024-07-25 10:56:09.071731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR completion triple repeats on tqpair=(0xef1080) roughly every 6 ms for further len:32 WRITEs (lba 11392, 768, 15456, ..., 20576) through 10:56:09.420099 ...]
00:17:39.809 00:17:39.809 Latency(us) 00:17:39.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.809 Job: nvme0n1
(Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:39.809 nvme0n1 : 2.00 5338.79 667.35 0.00 0.00 2990.83 2249.08 10843.23 00:17:39.809 =================================================================================================================== 00:17:39.809 Total : 5338.79 667.35 0.00 0.00 2990.83 2249.08 10843.23 00:17:39.809 0 00:17:39.809 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:39.809 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:39.809 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:39.809 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:39.809 | .driver_specific 00:17:39.809 | .nvme_error 00:17:39.809 | .status_code 00:17:39.809 | .command_transient_transport_error' 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 344 > 0 )) 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80099 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80099 ']' 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80099 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80099 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:40.067 killing process with pid 80099 00:17:40.067 Received shutdown signal, test time was about 2.000000 seconds 00:17:40.067 00:17:40.067 Latency(us) 00:17:40.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.067 =================================================================================================================== 00:17:40.067 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80099' 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80099 00:17:40.067 10:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80099 00:17:40.631 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79885 00:17:40.631 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79885 ']' 00:17:40.631 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79885 00:17:40.631 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:40.631 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.632 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79885 00:17:40.632 killing process with pid 79885 00:17:40.632 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.632 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.632 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79885' 00:17:40.632 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79885 00:17:40.632 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79885 00:17:40.632 ************************************ 00:17:40.632 END TEST nvmf_digest_error 00:17:40.632 ************************************ 00:17:40.632 00:17:40.632 real 0m18.960s 00:17:40.632 user 0m35.854s 00:17:40.632 sys 0m5.605s 00:17:40.632 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:40.632 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:40.889 rmmod nvme_tcp 00:17:40.889 rmmod nvme_fabrics 00:17:40.889 rmmod nvme_keyring 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:40.889 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79885 ']' 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79885 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 79885 ']' 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 79885 00:17:40.890 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79885) - No such process 00:17:40.890 Process with pid 79885 is not found 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 79885 is not found' 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.890 10:56:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:40.890 ************************************ 00:17:40.890 END TEST nvmf_digest 00:17:40.890 ************************************ 00:17:40.890 00:17:40.890 real 0m39.068s 00:17:40.890 user 1m12.858s 00:17:40.890 sys 0m11.301s 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.890 ************************************ 00:17:40.890 START TEST nvmf_host_multipath 00:17:40.890 ************************************ 00:17:40.890 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:41.148 * Looking for test storage... 
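The nvmftestfini sequence traced above unloads the NVMe/TCP kernel modules, confirms the target process is already gone, removes the target network namespace, and flushes the initiator address before the next sub-test begins. A minimal sketch of an equivalent cleanup helper is shown below; the function name and the exact namespace-removal command are illustrative assumptions, not the literal common.sh implementation.

    # hypothetical cleanup helper mirroring the teardown commands in the trace above
    cleanup_nvmf_tcp_fixture() {
        local nvmfpid=$1                               # target PID recorded at startup
        sync
        modprobe -v -r nvme-tcp                        # also drops nvme_fabrics / nvme_keyring
        modprobe -v -r nvme-fabrics
        if kill -0 "$nvmfpid" 2>/dev/null; then        # only kill the target if it is still alive
            kill "$nvmfpid" && wait "$nvmfpid"
        fi
        ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumed equivalent of remove_spdk_ns
        ip -4 addr flush nvmf_init_if                  # drop 10.0.0.1/24 from the initiator veth
    }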
00:17:41.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.148 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
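Before any connection is attempted, nvmf/common.sh seeds the host identity and multipath.sh pins down its bdev and RPC settings, as echoed in the trace above. The fragment below restates those assignments as a self-contained sketch; the derivation of NVME_HOSTID from the generated NQN is an assumption that matches the values printed above, not necessarily the exact expression the script uses.

    # condensed sketch of the configuration echoed above (illustrative only)
    NVMF_PORT=4420                               # first ANA path
    NVMF_SECOND_PORT=4421                        # second ANA path
    MALLOC_BDEV_SIZE=64                          # MiB backing the exported namespace
    MALLOC_BLOCK_SIZE=512                        # bytes per block

    NVME_HOSTNQN=$(nvme gen-hostnqn)             # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}              # assumed: host ID is the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

These two listener ports are the paths the ANA test toggles between later in the run.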
00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:41.149 10:56:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:41.149 Cannot find device "nvmf_tgt_br" 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.149 Cannot find device "nvmf_tgt_br2" 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:41.149 Cannot find device "nvmf_tgt_br" 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:41.149 Cannot find device "nvmf_tgt_br2" 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:41.149 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:41.406 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:41.406 10:56:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:41.406 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:41.406 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.406 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:41.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:17:41.407 00:17:41.407 --- 10.0.0.2 ping statistics --- 00:17:41.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.407 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:41.407 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:41.407 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:17:41.407 00:17:41.407 --- 10.0.0.3 ping statistics --- 00:17:41.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.407 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:41.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:41.407 00:17:41.407 --- 10.0.0.1 ping statistics --- 00:17:41.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.407 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:41.407 10:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:41.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80372 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80372 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80372 ']' 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.407 10:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:41.407 [2024-07-25 10:56:11.079640] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
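The nvmf_veth_init and nvmfappstart steps traced above give the target its own network namespace, wire it to the initiator through veth pairs and a bridge, verify both target addresses answer ping, and only then launch nvmf_tgt inside that namespace. The recap below is a simplified sketch of the commands visible in the trace, not the literal common.sh functions.

    # condensed sketch of the veth/namespace bring-up traced above (simplified)
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up    # join the host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # sanity-check both paths
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

With both 10.0.0.2 and 10.0.0.3 reachable and the target listening on its RPC socket, the script can add TCP listeners on ports 4420 and 4421 and exercise multipath failover between them.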
00:17:41.407 [2024-07-25 10:56:11.080021] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.665 [2024-07-25 10:56:11.227995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:41.665 [2024-07-25 10:56:11.355823] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.665 [2024-07-25 10:56:11.356150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.665 [2024-07-25 10:56:11.356416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.665 [2024-07-25 10:56:11.356586] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.665 [2024-07-25 10:56:11.356687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.665 [2024-07-25 10:56:11.356971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.665 [2024-07-25 10:56:11.356983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.923 [2024-07-25 10:56:11.411885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:42.490 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.490 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:42.490 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.490 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:42.490 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:42.490 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.490 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80372 00:17:42.490 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:42.749 [2024-07-25 10:56:12.334722] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.749 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:43.008 Malloc0 00:17:43.008 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:43.266 10:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.525 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.783 [2024-07-25 10:56:13.409718] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.783 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:44.065 [2024-07-25 10:56:13.649816] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:44.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.065 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80428 00:17:44.065 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:44.065 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.065 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80428 /var/tmp/bdevperf.sock 00:17:44.065 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80428 ']' 00:17:44.065 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.065 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.065 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.065 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.065 10:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:45.013 10:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.013 10:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:45.013 10:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:45.272 10:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:45.839 Nvme0n1 00:17:45.839 10:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:46.097 Nvme0n1 00:17:46.097 10:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:46.097 10:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:47.044 10:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:47.044 10:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:47.303 10:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:47.561 10:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:47.561 10:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80473 00:17:47.561 10:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:47.561 10:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80372 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:54.128 Attaching 4 probes... 00:17:54.128 @path[10.0.0.2, 4421]: 16612 00:17:54.128 @path[10.0.0.2, 4421]: 16991 00:17:54.128 @path[10.0.0.2, 4421]: 17035 00:17:54.128 @path[10.0.0.2, 4421]: 17248 00:17:54.128 @path[10.0.0.2, 4421]: 17261 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80473 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:54.128 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:54.386 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:54.386 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80586 00:17:54.386 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:54.386 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80372 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:00.953 10:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:00.953 10:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:00.953 Attaching 4 probes... 00:18:00.953 @path[10.0.0.2, 4420]: 15937 00:18:00.953 @path[10.0.0.2, 4420]: 15479 00:18:00.953 @path[10.0.0.2, 4420]: 14936 00:18:00.953 @path[10.0.0.2, 4420]: 15161 00:18:00.953 @path[10.0.0.2, 4420]: 15400 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80586 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:00.953 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:01.213 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:01.213 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80372 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:01.213 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80698 00:18:01.213 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:07.781 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:07.781 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:07.781 Attaching 4 probes... 00:18:07.781 @path[10.0.0.2, 4421]: 14586 00:18:07.781 @path[10.0.0.2, 4421]: 17072 00:18:07.781 @path[10.0.0.2, 4421]: 15759 00:18:07.781 @path[10.0.0.2, 4421]: 16281 00:18:07.781 @path[10.0.0.2, 4421]: 16766 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80698 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80372 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80817 00:18:07.781 10:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.345 Attaching 4 probes... 
00:18:14.345 00:18:14.345 00:18:14.345 00:18:14.345 00:18:14.345 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80817 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:14.345 10:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:14.603 10:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:14.862 10:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:14.862 10:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80930 00:18:14.862 10:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80372 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:14.862 10:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.426 Attaching 4 probes... 
00:18:21.426 @path[10.0.0.2, 4421]: 17209 00:18:21.426 @path[10.0.0.2, 4421]: 16659 00:18:21.426 @path[10.0.0.2, 4421]: 16434 00:18:21.426 @path[10.0.0.2, 4421]: 16907 00:18:21.426 @path[10.0.0.2, 4421]: 16752 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80930 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:21.426 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:22.362 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:22.362 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81053 00:18:22.362 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:22.362 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80372 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.926 Attaching 4 probes... 
00:18:28.926 @path[10.0.0.2, 4420]: 15787 00:18:28.926 @path[10.0.0.2, 4420]: 15396 00:18:28.926 @path[10.0.0.2, 4420]: 15634 00:18:28.926 @path[10.0.0.2, 4420]: 15571 00:18:28.926 @path[10.0.0.2, 4420]: 15416 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81053 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:28.926 [2024-07-25 10:56:58.542949] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:28.926 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:29.184 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:35.749 10:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:35.749 10:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81228 00:18:35.749 10:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:35.749 10:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80372 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:42.319 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:42.319 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:42.319 Attaching 4 probes... 
00:18:42.319 @path[10.0.0.2, 4421]: 16792 00:18:42.319 @path[10.0.0.2, 4421]: 17508 00:18:42.319 @path[10.0.0.2, 4421]: 17232 00:18:42.319 @path[10.0.0.2, 4421]: 16630 00:18:42.319 @path[10.0.0.2, 4421]: 16727 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81228 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80428 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80428 ']' 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80428 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80428 00:18:42.319 killing process with pid 80428 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80428' 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80428 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80428 00:18:42.319 Connection closed with partial response: 00:18:42.319 00:18:42.319 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80428 00:18:42.319 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:42.319 [2024-07-25 10:56:13.716335] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
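Before the bdevperf log (try.txt) continues, note that the confirm_io_on_port checks repeated throughout the run above reduce to roughly the following helper. Local variable names are paraphrased from the xtrace; the actual implementation lives in test/nvmf/host/multipath.sh alongside the nvmf_path.bt probe script, so this is a sketch of the flow, not the script itself:

# confirm_io_on_port <ana_state> <port>: verify that I/O is flowing on the path the target
# reports with the given ANA state (empty arguments mean "no path should carry I/O").
confirm_io_on_port() {
    local ana_state=$1 expected_port=$2
    local trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    # attach the nvmf_path.bt probes to the target (pid 80372) and let bdevperf run for a while
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80372 \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt &> "$trace" &
    local dtrace_pid=$!
    sleep 6

    # ask the target which listener currently carries the requested ANA state
    local active_port
    active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r ".[] | select (.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

    # trace.txt lines look like "@path[10.0.0.2, 4421]: 16612"; take the port from the first one
    local port
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

    # the port that saw I/O must match both the expected port and the one the target reports
    [[ $port == "$expected_port" ]]
    [[ $port == "$active_port" ]]

    kill $dtrace_pid
    rm -f "$trace"
}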
00:18:42.319 [2024-07-25 10:56:13.716460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80428 ] 00:18:42.319 [2024-07-25 10:56:13.850571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.319 [2024-07-25 10:56:13.970121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.319 [2024-07-25 10:56:14.022111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:42.319 Running I/O for 90 seconds... 00:18:42.319 [2024-07-25 10:56:23.924153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.924237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.924319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.924358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.924393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.924429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.924465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.924500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.924536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924942] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.924977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.924997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.925201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.925236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.925270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 
10:56:23.925304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.925339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.925373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.925407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.320 [2024-07-25 10:56:23.925442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32096 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.320 [2024-07-25 10:56:23.925683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.320 [2024-07-25 10:56:23.925703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.925718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.925739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.925754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.925775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.925790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.925811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.925826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.925847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.925862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.925896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.925912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.925933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.925964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.925984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.926152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.926190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.926228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.926265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.926300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.926336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.926372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:32688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.926407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:42.321 
[2024-07-25 10:56:23.926463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.926978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.926998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.321 [2024-07-25 10:56:23.927020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.927045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.927061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.927080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.927095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.927115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.927130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.927150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.927165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:42.321 [2024-07-25 10:56:23.927185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.321 [2024-07-25 10:56:23.927199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:42.322 [2024-07-25 10:56:23.927596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.927780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.927817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.927853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.927904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.927927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.927943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 
nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.928020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.928057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.928095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.928132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.322 [2024-07-25 10:56:23.928645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.928680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.928731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.322 [2024-07-25 10:56:23.928766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:42.322 [2024-07-25 10:56:23.928786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.928801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:18:42.323 [2024-07-25 10:56:23.928822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.928837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.928857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.928873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.928893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.928907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.928927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.928953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.928976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.928997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.929019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.929034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.929054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.929068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.929088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.929102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.929122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.929137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.929157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.929171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.929191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.929206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:23.930551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:23.930597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.446570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.446654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.446691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.446726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.446786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.446823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.446873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.446910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:30.446944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:30.446978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.446998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:30.447013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:30.447047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:30.447081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:30.447116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:30.447149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.323 [2024-07-25 10:56:30.447183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.447222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:42.323 [2024-07-25 10:56:30.447275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.447309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.447343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.447377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.447411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.447444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.447477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.447511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:42.323 [2024-07-25 10:56:30.447530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.323 [2024-07-25 10:56:30.447544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.324 [2024-07-25 10:56:30.447578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.324 [2024-07-25 10:56:30.447611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.324 [2024-07-25 10:56:30.447644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.324 [2024-07-25 10:56:30.447686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.324 [2024-07-25 10:56:30.447720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.324 [2024-07-25 10:56:30.447754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.447787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.447839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.447890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.447928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.447962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.447982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.447997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 
dnr:0 00:18:42.324 [2024-07-25 10:56:30.448351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.324 [2024-07-25 10:56:30.448453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.324 [2024-07-25 10:56:30.448468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.448501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.448534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.448574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.448609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.448643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.448684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.448718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.448752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.448786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.448819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.448854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.448904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.448938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.448973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.448995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.449010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.449053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.449087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.449121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.449155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.449188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.325 [2024-07-25 10:56:30.449222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:42.325 [2024-07-25 10:56:30.449391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:42.325 [2024-07-25 10:56:30.449697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.325 [2024-07-25 10:56:30.449711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.449731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.449745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.449765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.449779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.449820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.449839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.449872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.449890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.449910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.449933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.449953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.449967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.449987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.450002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.450065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.450099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.450134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 
dnr:0 00:18:42.326 [2024-07-25 10:56:30.450534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.450668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.450683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.451373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.451399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.451431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.451448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.451476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.451491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.451518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.451533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.451579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.451596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.451624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.451638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.451665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.451680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.451707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.451722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.451765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.326 [2024-07-25 10:56:30.451784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:42.326 [2024-07-25 10:56:30.451812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.326 [2024-07-25 10:56:30.451827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:30.451868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:30.451886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:30.451914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:30.451929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:30.451957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:30.451972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:30.451999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:30.452013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:30.452040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:30.452055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:30.452082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:30.452097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:30.452132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:30.452148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.481594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.481662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.481740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.481760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.481782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.481797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.481816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.481830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.481849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.481863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.481900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.481915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.481935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.481949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.481968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:42.327 [2024-07-25 10:56:37.481983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.482275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.482319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.482390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.482447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.482485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.482525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.482558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.327 [2024-07-25 10:56:37.482591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.482985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.482999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.483019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.483033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:42.327 [2024-07-25 10:56:37.483052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.327 [2024-07-25 10:56:37.483066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.483207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.483242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.483276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.483317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.483353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:18:42.328 [2024-07-25 10:56:37.483373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.483387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.483436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.483470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.483979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.483999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.484013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.484047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.484082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.484121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.484156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.484190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.484224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.484299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.484334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.484369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.328 [2024-07-25 10:56:37.484404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.484439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.328 [2024-07-25 10:56:37.484483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:42.328 [2024-07-25 10:56:37.484503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:42.329 [2024-07-25 10:56:37.484518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:42.329 [2024-07-25 10:56:37.484538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.329 [2024-07-25 10:56:37.484555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:42.329 [2024-07-25 10:56:37.484575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.329 [2024-07-25 10:56:37.484589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:42.329 [2024-07-25 10:56:37.484609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.329 [2024-07-25 10:56:37.484623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.484643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.484657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.484677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.484691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.484711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.484757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.484779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.484793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.484814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.484828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.484848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.484863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.484895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.484913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.484935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.484949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.484969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.484984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.485019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.485974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.485989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:18:42.330 [2024-07-25 10:56:37.486024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.486058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.486111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.486153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.330 [2024-07-25 10:56:37.486202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.486244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.486286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.486333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.486404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.486443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.486483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.486522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:42.330 [2024-07-25 10:56:37.486547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.330 [2024-07-25 10:56:37.486561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.486586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.486601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.486632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.486647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.486672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.486688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.486713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.486735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.486765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.486781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.486807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.486821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.486846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.486860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.486910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.486928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.486955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.486970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.486995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.487009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.487034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.487049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.487073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:37.487088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.487112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.331 [2024-07-25 10:56:37.487127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.487152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.331 [2024-07-25 10:56:37.487175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.487201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.331 [2024-07-25 10:56:37.487216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.487241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.331 [2024-07-25 10:56:37.487256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.487281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.331 [2024-07-25 10:56:37.487296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.487320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:42.331 [2024-07-25 10:56:37.487335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.487360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.331 [2024-07-25 10:56:37.487375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:37.487400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.331 [2024-07-25 10:56:37.487420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.970916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:50.971016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:50.971098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:50.971134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:50.971169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:50.971204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:50.971266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:50.971304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.331 [2024-07-25 10:56:50.971339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.331 [2024-07-25 10:56:50.971373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.331 [2024-07-25 10:56:50.971407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.331 [2024-07-25 10:56:50.971439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:42.331 [2024-07-25 10:56:50.971458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.971472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.971506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.971555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.971590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.971626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.971694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.971728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.971768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.971797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.971825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.971853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.971900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.971944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.971972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.971987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:96 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.332 [2024-07-25 10:56:50.972286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107008 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:42.332 [2024-07-25 10:56:50.972624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.332 [2024-07-25 10:56:50.972638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.972651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.972680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.972708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.972735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.972763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.972790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.972818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.972850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.972890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 
10:56:50.972919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.972946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.972973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.972988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.973001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.973028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.973056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.973083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973477] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.333 [2024-07-25 10:56:50.973531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.973564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.973595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.973623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.333 [2024-07-25 10:56:50.973650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.333 [2024-07-25 10:56:50.973664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.973973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.973987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.334 [2024-07-25 10:56:50.974000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:42.334 [2024-07-25 10:56:50.974122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 
10:56:50.974470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.334 [2024-07-25 10:56:50.974554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.334 [2024-07-25 10:56:50.974568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dd680 is same with the state(5) to be set 00:18:42.335 [2024-07-25 10:56:50.974585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.974596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.974606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107376 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.974619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.974633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.974643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.974653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107896 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.974671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.974685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.974695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.974705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107904 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.974718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.974731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.974741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.974751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107912 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.974763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 
[2024-07-25 10:56:50.974776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.974792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.974802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107920 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.974815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.974828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.974838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.974848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107928 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.974861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.974875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.974885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.974896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107936 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.974907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.974921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.974943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.974954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107944 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.974967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.974981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.974991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.975001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107952 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.975014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.975027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.975037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.975046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107384 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.975064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.975076] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.975087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.975097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107392 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.975109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.975122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.975132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.975142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107400 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.975154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.975177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.975188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.975198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107408 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.975211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.975223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.975233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.975243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107416 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.975256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.975268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.975278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.975288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107424 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.975300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.975313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.335 [2024-07-25 10:56:50.975323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.975333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107432 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.975345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.975358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:18:42.335 [2024-07-25 10:56:50.975369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.335 [2024-07-25 10:56:50.975379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107440 len:8 PRP1 0x0 PRP2 0x0 00:18:42.335 [2024-07-25 10:56:50.975391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.975456] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12dd680 was disconnected and freed. reset controller. 00:18:42.335 [2024-07-25 10:56:50.976534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:42.335 [2024-07-25 10:56:50.976610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.335 [2024-07-25 10:56:50.976631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.335 [2024-07-25 10:56:50.976664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125f100 (9): Bad file descriptor 00:18:42.335 [2024-07-25 10:56:50.977066] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.335 [2024-07-25 10:56:50.977096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125f100 with addr=10.0.0.2, port=4421 00:18:42.335 [2024-07-25 10:56:50.977112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f100 is same with the state(5) to be set 00:18:42.335 [2024-07-25 10:56:50.977142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125f100 (9): Bad file descriptor 00:18:42.335 [2024-07-25 10:56:50.977183] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:42.335 [2024-07-25 10:56:50.977200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:42.335 [2024-07-25 10:56:50.977214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:42.335 [2024-07-25 10:56:50.977244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:42.335 [2024-07-25 10:56:50.977261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:42.335 [2024-07-25 10:57:01.036302] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
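The burst of *NOTICE* lines above is bdev_nvme draining the failed I/O qpair during the multipath failover: every command still outstanding on qpair 0x12dd680 is completed with ABORTED - SQ DELETION, the qpair is freed, and the controller is reconnected to the listener at 10.0.0.2:4421. When triaging a run like this offline, the aborted commands can be tallied straight from a saved copy of the console output; the snippet below is only a rough sketch and assumes the log was saved to a file named build.log (a name not produced by the test itself).

  # count ABORTED - SQ DELETION completions, then break the aborted submissions down by opcode
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l
  grep 'nvme_io_qpair_print_command' build.log |
    awk '{ for (i = 1; i <= NF; i++) if ($i == "READ" || $i == "WRITE") n[$i]++ }
         END { for (op in n) print op, n[op] }'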
00:18:42.335 Received shutdown signal, test time was about 55.446039 seconds 00:18:42.335 00:18:42.335 Latency(us) 00:18:42.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.335 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:42.335 Verification LBA range: start 0x0 length 0x4000 00:18:42.336 Nvme0n1 : 55.45 7056.67 27.57 0.00 0.00 18105.00 495.24 7015926.69 00:18:42.336 =================================================================================================================== 00:18:42.336 Total : 7056.67 27.57 0.00 0.00 18105.00 495.24 7015926.69 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.336 rmmod nvme_tcp 00:18:42.336 rmmod nvme_fabrics 00:18:42.336 rmmod nvme_keyring 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80372 ']' 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80372 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80372 ']' 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80372 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80372 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:42.336 killing process with pid 80372 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80372' 00:18:42.336 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80372 00:18:42.336 10:57:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80372 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:42.595 00:18:42.595 real 1m1.641s 00:18:42.595 user 2m51.434s 00:18:42.595 sys 0m18.045s 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:42.595 ************************************ 00:18:42.595 END TEST nvmf_host_multipath 00:18:42.595 ************************************ 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.595 ************************************ 00:18:42.595 START TEST nvmf_timeout 00:18:42.595 ************************************ 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:42.595 * Looking for test storage... 
00:18:42.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:42.595 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:42.854 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
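Note that timeout.sh keeps two JSON-RPC endpoints apart: the nvmf_tgt application started below answers on the default /var/tmp/spdk.sock, while bdevperf is given the separate bdevperf_rpc_sock defined above, so the same rpc.py client can drive either process by choosing the socket with -s. A minimal illustration of that split follows; rpc_get_methods is only an example query and is not part of this test.

  # default socket reaches the nvmf_tgt process; -s selects the bdevperf process instead
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods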
00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:42.855 Cannot find device "nvmf_tgt_br" 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.855 Cannot find device "nvmf_tgt_br2" 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:42.855 Cannot find device "nvmf_tgt_br" 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:42.855 Cannot find device "nvmf_tgt_br2" 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:42.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:42.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:42.855 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:42.856 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:43.113 10:57:12 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:43.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:18:43.113 00:18:43.113 --- 10.0.0.2 ping statistics --- 00:18:43.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.113 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:43.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:43.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:18:43.113 00:18:43.113 --- 10.0.0.3 ping statistics --- 00:18:43.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.113 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:43.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:43.113 00:18:43.113 --- 10.0.0.1 ping statistics --- 00:18:43.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.113 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81540 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81540 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81540 ']' 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.113 10:57:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:43.113 [2024-07-25 10:57:12.795199] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:43.113 [2024-07-25 10:57:12.795304] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.371 [2024-07-25 10:57:12.938755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:43.371 [2024-07-25 10:57:13.060058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
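For reference, the veth/bridge topology assembled by the nvmftestinit trace above can be reproduced by hand with the commands below. This is only a condensed sketch of what the log shows (namespace, interface names, addresses and iptables rules are taken verbatim from the trace), not the nvmf/common.sh implementation itself, and the teardown probing that precedes a clean setup (the "Cannot find device" lines) is omitted.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # reachability check from the host side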
00:18:43.371 [2024-07-25 10:57:13.060136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.371 [2024-07-25 10:57:13.060151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.371 [2024-07-25 10:57:13.060162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.371 [2024-07-25 10:57:13.060171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.371 [2024-07-25 10:57:13.060309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.371 [2024-07-25 10:57:13.060335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.630 [2024-07-25 10:57:13.119009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:44.196 10:57:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.196 10:57:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:44.196 10:57:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.196 10:57:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.196 10:57:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:44.196 10:57:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.196 10:57:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:44.196 10:57:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:44.456 [2024-07-25 10:57:14.022885] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.456 10:57:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:44.715 Malloc0 00:18:44.715 10:57:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:44.973 10:57:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.232 10:57:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.491 [2024-07-25 10:57:15.093892] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.491 10:57:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81594 00:18:45.491 10:57:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:45.491 10:57:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81594 /var/tmp/bdevperf.sock 00:18:45.491 10:57:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81594 ']' 00:18:45.491 10:57:15 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.491 10:57:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.491 10:57:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.491 10:57:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.491 10:57:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:45.491 [2024-07-25 10:57:15.166172] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:45.491 [2024-07-25 10:57:15.166248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81594 ] 00:18:45.749 [2024-07-25 10:57:15.303504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.749 [2024-07-25 10:57:15.440490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.008 [2024-07-25 10:57:15.512027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:46.575 10:57:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.575 10:57:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:46.575 10:57:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:46.845 10:57:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:47.104 NVMe0n1 00:18:47.104 10:57:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:47.104 10:57:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81613 00:18:47.104 10:57:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:47.104 Running I/O for 10 seconds... 
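For readers following the trace, the setup that host/timeout.sh has just performed (the @25 through @51 steps logged above) condenses to roughly the sketch below. Here "rpc" stands in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, "bdevperf" for build/examples/bdevperf and "bdevperf.py" for its companion script; outside this CI environment the paths and the 10.0.0.2 listener address would need adapting.

    # Target side: TCP transport, a small malloc bdev (64 MB, 512-byte blocks), and one
    # subsystem (cnode1) exposing it as a namespace on 10.0.0.2:4420.
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf waits for RPC (-z) on its own socket, the NVMe bdev module is
    # configured, and the controller is attached with a 2 s reconnect delay and a 5 s
    # controller-loss timeout before the 10 s verify workload is started over RPC.
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &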
00:18:48.041 10:57:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.315 [2024-07-25 10:57:17.892650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 
[2024-07-25 10:57:17.892932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.892987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.892996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.893004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.893015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.893023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.893032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.893041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.893051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.893060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.315 [2024-07-25 10:57:17.893070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.315 [2024-07-25 10:57:17.893079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:48.316 [2024-07-25 10:57:17.893544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.316 [2024-07-25 10:57:17.893727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893737] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.316 [2024-07-25 10:57:17.893745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.316 [2024-07-25 10:57:17.893778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.316 [2024-07-25 10:57:17.893797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.316 [2024-07-25 10:57:17.893830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.316 [2024-07-25 10:57:17.893847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.316 [2024-07-25 10:57:17.893864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.316 [2024-07-25 10:57:17.893874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.316 [2024-07-25 10:57:17.893881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.893891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.317 [2024-07-25 10:57:17.893909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.893919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.893927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.893937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.893945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.893954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.893963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.893973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.893982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.893991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.893999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:94 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.317 [2024-07-25 10:57:17.894398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63584 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.317 [2024-07-25 10:57:17.894573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:48.317 [2024-07-25 10:57:17.894627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.317 [2024-07-25 10:57:17.894727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.317 [2024-07-25 10:57:17.894737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894832] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.894985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.894995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.318 [2024-07-25 10:57:17.895317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c1b0 is same with the state(5) to be set 00:18:48.318 [2024-07-25 10:57:17.895338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:48.318 [2024-07-25 10:57:17.895345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:48.318 [2024-07-25 10:57:17.895352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63952 len:8 PRP1 0x0 PRP2 0x0 00:18:48.318 [2024-07-25 10:57:17.895360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.318 [2024-07-25 10:57:17.895435] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x218c1b0 was disconnected and freed. reset controller. 
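The wall of nvme_qpair prints above is one command/completion pair per in-flight I/O that was aborted (SQ DELETION) once the listener was removed at @55. When working from a saved copy of this console output (build.log is a placeholder name, not a file produced by the test), the dump can be summarized instead of read line by line:

    # total number of aborted completions
    grep -o 'ABORTED - SQ DELETION' build.log | wc -l
    # split of the aborted commands by opcode
    grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c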
00:18:48.318 [2024-07-25 10:57:17.895679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:48.318 [2024-07-25 10:57:17.895759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211bd40 (9): Bad file descriptor 00:18:48.318 [2024-07-25 10:57:17.895887] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:48.318 [2024-07-25 10:57:17.895908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211bd40 with addr=10.0.0.2, port=4420 00:18:48.318 [2024-07-25 10:57:17.895918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211bd40 is same with the state(5) to be set 00:18:48.318 [2024-07-25 10:57:17.895934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211bd40 (9): Bad file descriptor 00:18:48.318 [2024-07-25 10:57:17.895948] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:48.318 [2024-07-25 10:57:17.895956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:48.318 [2024-07-25 10:57:17.895966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:48.318 [2024-07-25 10:57:17.895984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:48.318 [2024-07-25 10:57:17.895994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:48.318 10:57:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:50.228 [2024-07-25 10:57:19.896320] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.228 [2024-07-25 10:57:19.896390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211bd40 with addr=10.0.0.2, port=4420 00:18:50.228 [2024-07-25 10:57:19.896405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211bd40 is same with the state(5) to be set 00:18:50.228 [2024-07-25 10:57:19.896446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211bd40 (9): Bad file descriptor 00:18:50.228 [2024-07-25 10:57:19.896475] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:50.228 [2024-07-25 10:57:19.896487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:50.228 [2024-07-25 10:57:19.896497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:50.228 [2024-07-25 10:57:19.896525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
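While these reconnect attempts are failing, the controller and its namespace bdev remain registered with bdevperf; the get_controller/get_bdev helpers exercised just below (@57/@58, and again at @62/@63 after the loss timeout) reduce to two RPC calls piped through jq, with "rpc" again standing in for scripts/rpc.py:

    rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # NVMe0 while retrying, empty after the loss timeout
    rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'              # NVMe0n1 while retrying, empty after the loss timeout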
00:18:50.228 [2024-07-25 10:57:19.896536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.228 10:57:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:50.228 10:57:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:50.228 10:57:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:50.487 10:57:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:50.487 10:57:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:50.487 10:57:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:50.487 10:57:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:50.746 10:57:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:50.746 10:57:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:52.668 [2024-07-25 10:57:21.896745] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.668 [2024-07-25 10:57:21.896833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211bd40 with addr=10.0.0.2, port=4420 00:18:52.668 [2024-07-25 10:57:21.896848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211bd40 is same with the state(5) to be set 00:18:52.668 [2024-07-25 10:57:21.896888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211bd40 (9): Bad file descriptor 00:18:52.668 [2024-07-25 10:57:21.896907] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:52.668 [2024-07-25 10:57:21.896917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:52.668 [2024-07-25 10:57:21.896927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:52.668 [2024-07-25 10:57:21.896956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:52.668 [2024-07-25 10:57:21.896969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:54.572 [2024-07-25 10:57:23.897089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:54.572 [2024-07-25 10:57:23.897178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:54.572 [2024-07-25 10:57:23.897189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:54.572 [2024-07-25 10:57:23.897199] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:54.572 [2024-07-25 10:57:23.897226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
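The timestamps of the failed attempts line up with the options passed at @46: with a 2 s reconnect delay the host retries at roughly 10:57:17.9, 10:57:19.9 and 10:57:21.9, and once the 5 s controller-loss timeout has elapsed the attempt at 10:57:23.9 is the terminal one, after which the controller is left failed and removed. That is why the @62/@63 checks after the latency summary below compare against empty strings.

    t ~ 0 s  (10:57:17.9)  listener removed, qpair disconnected, first reconnect fails with errno 111
    t ~ 2 s  (10:57:19.9)  second reconnect fails
    t ~ 4 s  (10:57:21.9)  third reconnect fails
    t ~ 6 s  (10:57:23.9)  past the 5 s ctrlr-loss-timeout: controller stays failed and is removed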
00:18:55.509
00:18:55.509 Latency(us)
00:18:55.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:55.509 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:55.509 Verification LBA range: start 0x0 length 0x4000
00:18:55.509 NVMe0n1 : 8.13 973.90 3.80 15.74 0.00 129157.03 3053.38 7015926.69
00:18:55.509 ===================================================================================================================
00:18:55.509 Total : 973.90 3.80 15.74 0.00 129157.03 3053.38 7015926.69
00:18:55.509 0
00:18:55.768 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:18:55.768 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:55.768 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:56.035 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:18:56.035 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:18:56.035 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:56.035 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:56.299 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:18:56.299 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81613
00:18:56.299 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81594
00:18:56.299 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81594 ']'
00:18:56.299 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81594
00:18:56.299 10:57:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:18:56.299 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:56.299 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81594
killing process with pid 81594
Received shutdown signal, test time was about 9.264876 seconds
00:18:56.299
00:18:56.299 Latency(us)
00:18:56.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:56.299 ===================================================================================================================
00:18:56.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:56.299 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:18:56.299 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:18:56.299 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81594'
00:18:56.299 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81594
00:18:56.299 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81594
00:18:56.866 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:56.866 [2024-07-25 10:57:26.561961] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:56.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:56.866 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81735
00:18:56.866 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:18:56.866 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81735 /var/tmp/bdevperf.sock
00:18:56.866 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81735 ']'
00:18:56.866 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:56.866 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:56.866 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:56.866 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:56.866 10:57:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:18:57.125 [2024-07-25 10:57:26.624164] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:18:57.125 [2024-07-25 10:57:26.624263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81735 ]
00:18:57.125 [2024-07-25 10:57:26.759927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:57.384 [2024-07-25 10:57:26.875206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:57.384 [2024-07-25 10:57:26.946676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:57.951 10:57:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:57.951 10:57:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
00:18:57.951 10:57:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:18:58.210 10:57:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:18:58.469 NVMe0n1
00:18:58.469 10:57:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81753
00:18:58.469 10:57:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:58.469 10:57:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:18:58.469 Running I/O for 10 seconds...
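As a readability aid only, the bdevperf-side setup recorded in the trace above boils down to the following RPC sequence. This is a minimal sketch, not part of the test output: the paths, the 10.0.0.2:4420 address, the NQN and the timeout values are copied verbatim from the log, while the inline comments are interpretation.

  # Retry setting the test applies before attaching (argument copied as-is from the trace).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # Attach the remote controller with the reconnect knobs under test:
  # retry the connection roughly once per second, fail pending I/O after 2 s,
  # and give up on the controller after 5 s of continuous loss.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1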
00:18:59.406 10:57:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.667 [2024-07-25 10:57:29.217473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.667 [2024-07-25 10:57:29.217548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.667 [2024-07-25 10:57:29.217570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.667 [2024-07-25 10:57:29.217580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.667 [2024-07-25 10:57:29.217591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.667 [2024-07-25 10:57:29.217600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.667 [2024-07-25 10:57:29.217611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.667 [2024-07-25 10:57:29.217619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.667 [2024-07-25 10:57:29.217629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.667 [2024-07-25 10:57:29.217638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.667 [2024-07-25 10:57:29.217649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.217657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.217675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.217693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:59.668 [2024-07-25 10:57:29.217730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217966] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.217984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.217995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.668 [2024-07-25 10:57:29.218232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.668 [2024-07-25 10:57:29.218501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.668 [2024-07-25 10:57:29.218513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.218521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.218540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.218557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.218574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 
10:57:29.218798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.218861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.218880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.218899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.218930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.218949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.218967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.218984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.218994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.219002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.219020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.219038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.219056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.219073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.219091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.219112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.219130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.219149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.669 [2024-07-25 10:57:29.219168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:48 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.219186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.219205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.219224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.669 [2024-07-25 10:57:29.219234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.669 [2024-07-25 10:57:29.219243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78856 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 
10:57:29.219567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.670 [2024-07-25 10:57:29.219621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.670 [2024-07-25 10:57:29.219640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.670 [2024-07-25 10:57:29.219658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.670 [2024-07-25 10:57:29.219676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.670 [2024-07-25 10:57:29.219694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.670 [2024-07-25 10:57:29.219717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.670 [2024-07-25 10:57:29.219735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.670 [2024-07-25 10:57:29.219753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc41b0 is same with the state(5) to be set 00:18:59.670 [2024-07-25 10:57:29.219773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.670 [2024-07-25 10:57:29.219780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.670 [2024-07-25 10:57:29.219787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78512 len:8 PRP1 0x0 PRP2 0x0 00:18:59.670 [2024-07-25 10:57:29.219801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.670 [2024-07-25 10:57:29.219818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.670 [2024-07-25 10:57:29.219825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:18:59.670 [2024-07-25 10:57:29.219833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.670 [2024-07-25 10:57:29.219857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.670 [2024-07-25 10:57:29.219866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78976 len:8 PRP1 0x0 PRP2 0x0 00:18:59.670 [2024-07-25 10:57:29.219874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.670 [2024-07-25 10:57:29.219889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.670 [2024-07-25 10:57:29.219896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78984 len:8 PRP1 0x0 PRP2 0x0 00:18:59.670 [2024-07-25 10:57:29.219905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.670 [2024-07-25 10:57:29.219920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.670 [2024-07-25 10:57:29.219927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78992 len:8 PRP1 0x0 PRP2 0x0 00:18:59.670 [2024-07-25 10:57:29.219935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.670 [2024-07-25 10:57:29.219949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.670 [2024-07-25 10:57:29.219956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:79000 len:8 PRP1 0x0 PRP2 0x0 00:18:59.670 [2024-07-25 10:57:29.219963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.670 [2024-07-25 10:57:29.219971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.670 [2024-07-25 10:57:29.219977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.670 [2024-07-25 10:57:29.219989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79008 len:8 PRP1 0x0 PRP2 0x0 00:18:59.670 [2024-07-25 10:57:29.219997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.671 [2024-07-25 10:57:29.220012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.671 [2024-07-25 10:57:29.220019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79016 len:8 PRP1 0x0 PRP2 0x0 00:18:59.671 [2024-07-25 10:57:29.220027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.671 [2024-07-25 10:57:29.220041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.671 [2024-07-25 10:57:29.220048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79024 len:8 PRP1 0x0 PRP2 0x0 00:18:59.671 [2024-07-25 10:57:29.220060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.671 [2024-07-25 10:57:29.220075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.671 [2024-07-25 10:57:29.220082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79032 len:8 PRP1 0x0 PRP2 0x0 00:18:59.671 [2024-07-25 10:57:29.220090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.671 [2024-07-25 10:57:29.220104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.671 [2024-07-25 10:57:29.220111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79040 len:8 PRP1 0x0 PRP2 0x0 00:18:59.671 [2024-07-25 10:57:29.220119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.671 [2024-07-25 10:57:29.220134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.671 [2024-07-25 10:57:29.220140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 
0x0 00:18:59.671 [2024-07-25 10:57:29.220148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.671 [2024-07-25 10:57:29.220162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.671 [2024-07-25 10:57:29.220168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:18:59.671 [2024-07-25 10:57:29.220178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.671 [2024-07-25 10:57:29.220193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.671 [2024-07-25 10:57:29.220200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0 00:18:59.671 [2024-07-25 10:57:29.220208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.671 [2024-07-25 10:57:29.220223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.671 [2024-07-25 10:57:29.220235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:18:59.671 [2024-07-25 10:57:29.220243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.671 [2024-07-25 10:57:29.220258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.671 [2024-07-25 10:57:29.220265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:18:59.671 [2024-07-25 10:57:29.220273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.671 [2024-07-25 10:57:29.220288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.671 [2024-07-25 10:57:29.220295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79088 len:8 PRP1 0x0 PRP2 0x0 00:18:59.671 [2024-07-25 10:57:29.220307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220365] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fc41b0 was disconnected and freed. reset controller. 
00:18:59.671 [2024-07-25 10:57:29.220440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.671 [2024-07-25 10:57:29.220462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.220473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.671 [2024-07-25 10:57:29.220481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 10:57:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:18:59.671 [2024-07-25 10:57:29.240534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.671 [2024-07-25 10:57:29.240584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.240600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.671 [2024-07-25 10:57:29.240612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.671 [2024-07-25 10:57:29.240624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f53d40 is same with the state(5) to be set 00:18:59.671 [2024-07-25 10:57:29.240953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:59.671 [2024-07-25 10:57:29.240993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f53d40 (9): Bad file descriptor 00:18:59.671 [2024-07-25 10:57:29.241138] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:59.671 [2024-07-25 10:57:29.241173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f53d40 with addr=10.0.0.2, port=4420 00:18:59.671 [2024-07-25 10:57:29.241188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f53d40 is same with the state(5) to be set 00:18:59.671 [2024-07-25 10:57:29.241233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f53d40 (9): Bad file descriptor 00:18:59.671 [2024-07-25 10:57:29.241254] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:59.671 [2024-07-25 10:57:29.241266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:59.671 [2024-07-25 10:57:29.241279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:59.671 [2024-07-25 10:57:29.241304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:59.671 [2024-07-25 10:57:29.241317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.609 [2024-07-25 10:57:30.241479] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.610 [2024-07-25 10:57:30.241556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f53d40 with addr=10.0.0.2, port=4420 00:19:00.610 [2024-07-25 10:57:30.241571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f53d40 is same with the state(5) to be set 00:19:00.610 [2024-07-25 10:57:30.241595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f53d40 (9): Bad file descriptor 00:19:00.610 [2024-07-25 10:57:30.241614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:00.610 [2024-07-25 10:57:30.241622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:00.610 [2024-07-25 10:57:30.241633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:00.610 [2024-07-25 10:57:30.241659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:00.610 [2024-07-25 10:57:30.241670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.610 10:57:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.868 [2024-07-25 10:57:30.442660] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.868 10:57:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81753 00:19:01.804 [2024-07-25 10:57:31.259345] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:08.376 00:19:08.376 Latency(us) 00:19:08.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.377 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:08.377 Verification LBA range: start 0x0 length 0x4000 00:19:08.377 NVMe0n1 : 10.01 6294.42 24.59 0.00 0.00 20291.82 1251.14 3050402.91 00:19:08.377 =================================================================================================================== 00:19:08.377 Total : 6294.42 24.59 0.00 0.00 20291.82 1251.14 3050402.91 00:19:08.377 0 00:19:08.635 10:57:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81862 00:19:08.635 10:57:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:08.635 10:57:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:08.635 Running I/O for 10 seconds... 
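The abort/reset churn recorded above is intentional: host/timeout.sh queues a bdevperf job over the RPC socket, drops the target listener so outstanding I/O is aborted (SQ DELETION) and every reconnect to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED), then restores the listener so the next controller reset succeeds and the queued job completes. A minimal sketch of that cycle, built only from the commands visible in this trace (paths, NQN, address and socket copied from the log; the real script also captures return codes and timestamps):
  # queue the I/O job in the already-running bdevperf instance and remember its PID
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc_pid=$!
  sleep 1
  # drop the listener: in-flight commands are aborted and reconnect attempts start failing
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # restore the listener; the next reset reconnects and the queued job finishes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  wait $rpc_pid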
00:19:09.570 10:57:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.832 [2024-07-25 10:57:39.381500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.832 [2024-07-25 10:57:39.381565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.832 [2024-07-25 10:57:39.381588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.832 [2024-07-25 10:57:39.381598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.832 [2024-07-25 10:57:39.381610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.381619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.381640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.381660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.381683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.381707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.381735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.381767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:09.833 [2024-07-25 10:57:39.381787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.381806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.381826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.381847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.381904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.381924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.381945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.381965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.381991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382069] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.833 [2024-07-25 10:57:39.382328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.382347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.382369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.382389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.382409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.382428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.382463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.382484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.382504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.833 [2024-07-25 10:57:39.382523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.833 [2024-07-25 10:57:39.382534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.382543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.382563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.382583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.382602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.382621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.382640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.382659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 
[2024-07-25 10:57:39.382921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.382980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.382988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.383008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.383029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.383049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.383068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.383087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.383106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.383125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.383144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.383164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.383183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.383202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.383221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.383240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.383259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.383278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.834 [2024-07-25 10:57:39.383298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.834 [2024-07-25 10:57:39.383310] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.834 [2024-07-25 10:57:39.383328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78736 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 
[2024-07-25 10:57:39.383732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.835 [2024-07-25 10:57:39.383789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.835 [2024-07-25 10:57:39.383808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.835 [2024-07-25 10:57:39.383827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.835 [2024-07-25 10:57:39.383846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.835 [2024-07-25 10:57:39.383879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.835 [2024-07-25 10:57:39.383898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.835 [2024-07-25 10:57:39.383918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.835 [2024-07-25 10:57:39.383937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.383948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc3700 is same with the state(5) to be set 00:19:09.835 [2024-07-25 10:57:39.383959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.835 [2024-07-25 10:57:39.383972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.835 [2024-07-25 10:57:39.383980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:19:09.835 [2024-07-25 10:57:39.383990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.384000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.835 [2024-07-25 10:57:39.384008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.835 [2024-07-25 10:57:39.384016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78848 len:8 PRP1 0x0 PRP2 0x0 00:19:09.835 [2024-07-25 10:57:39.384024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.384033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.835 [2024-07-25 10:57:39.384040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.835 [2024-07-25 10:57:39.384048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78856 len:8 PRP1 0x0 PRP2 0x0 00:19:09.835 [2024-07-25 10:57:39.384056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.384065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.835 [2024-07-25 10:57:39.384071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.835 [2024-07-25 10:57:39.384079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78864 len:8 PRP1 0x0 PRP2 0x0 00:19:09.835 [2024-07-25 10:57:39.384088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.384096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.835 [2024-07-25 10:57:39.384103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.835 [2024-07-25 10:57:39.384110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78872 len:8 PRP1 0x0 PRP2 0x0 00:19:09.835 [2024-07-25 10:57:39.384118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.835 [2024-07-25 10:57:39.384127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.835 [2024-07-25 10:57:39.384133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:78880 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78888 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78896 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 
0x0 00:19:09.836 [2024-07-25 10:57:39.384346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.836 [2024-07-25 10:57:39.384506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.836 [2024-07-25 10:57:39.384513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:19:09.836 [2024-07-25 10:57:39.384522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.384584] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fc3700 was disconnected and freed. reset controller. 
00:19:09.836 [2024-07-25 10:57:39.384668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.836 [2024-07-25 10:57:39.384691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.398696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.836 [2024-07-25 10:57:39.398722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.398733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.836 [2024-07-25 10:57:39.398742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.398752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.836 [2024-07-25 10:57:39.398760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.836 [2024-07-25 10:57:39.398769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f53d40 is same with the state(5) to be set 00:19:09.836 [2024-07-25 10:57:39.398989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:09.836 [2024-07-25 10:57:39.399015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f53d40 (9): Bad file descriptor 00:19:09.836 [2024-07-25 10:57:39.399141] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:09.836 [2024-07-25 10:57:39.399162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f53d40 with addr=10.0.0.2, port=4420 00:19:09.836 [2024-07-25 10:57:39.399173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f53d40 is same with the state(5) to be set 00:19:09.836 [2024-07-25 10:57:39.399190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f53d40 (9): Bad file descriptor 00:19:09.836 [2024-07-25 10:57:39.399204] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:09.836 [2024-07-25 10:57:39.399213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:09.836 [2024-07-25 10:57:39.399223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:09.836 [2024-07-25 10:57:39.399253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:09.836 [2024-07-25 10:57:39.399264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:09.836 10:57:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:10.773 [2024-07-25 10:57:40.399441] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:10.773 [2024-07-25 10:57:40.399514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f53d40 with addr=10.0.0.2, port=4420 00:19:10.773 [2024-07-25 10:57:40.399530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f53d40 is same with the state(5) to be set 00:19:10.773 [2024-07-25 10:57:40.399554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f53d40 (9): Bad file descriptor 00:19:10.773 [2024-07-25 10:57:40.399584] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:10.773 [2024-07-25 10:57:40.399594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:10.773 [2024-07-25 10:57:40.399606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:10.773 [2024-07-25 10:57:40.399633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:10.773 [2024-07-25 10:57:40.399643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:11.710 [2024-07-25 10:57:41.399832] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.710 [2024-07-25 10:57:41.399919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f53d40 with addr=10.0.0.2, port=4420 00:19:11.710 [2024-07-25 10:57:41.399935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f53d40 is same with the state(5) to be set 00:19:11.710 [2024-07-25 10:57:41.399960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f53d40 (9): Bad file descriptor 00:19:11.710 [2024-07-25 10:57:41.399989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:11.710 [2024-07-25 10:57:41.400011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:11.710 [2024-07-25 10:57:41.400031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:11.710 [2024-07-25 10:57:41.400056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:11.710 [2024-07-25 10:57:41.400067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:13.089 [2024-07-25 10:57:42.403743] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.089 [2024-07-25 10:57:42.403818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f53d40 with addr=10.0.0.2, port=4420 00:19:13.089 [2024-07-25 10:57:42.403834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f53d40 is same with the state(5) to be set 00:19:13.089 [2024-07-25 10:57:42.404063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f53d40 (9): Bad file descriptor 00:19:13.089 [2024-07-25 10:57:42.404267] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:13.089 [2024-07-25 10:57:42.404280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:13.089 [2024-07-25 10:57:42.404290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:13.089 [2024-07-25 10:57:42.407615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:13.089 [2024-07-25 10:57:42.407642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:13.089 10:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:13.089 [2024-07-25 10:57:42.663502] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.089 10:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81862 00:19:14.027 [2024-07-25 10:57:43.445678] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
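The errno = 111 (ECONNREFUSED) loop above covers the window in which host/timeout.sh has removed the target's TCP listener: reconnect attempts are refused roughly once per second, and as soon as the listener is re-added the pending reset completes ("Resetting controller successful"). A minimal sketch of that listener toggle, assuming the commands are run from the SPDK repo root with the stock scripts/rpc.py and the subsystem/address used throughout this run:

# Drop the target's TCP listener; host reconnects start failing with ECONNREFUSED (111).
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
sleep 3    # a few refused reconnect attempts accumulate, roughly one per second here
# Re-add the listener; the next reconnect succeeds and the pending reset completes.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420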
00:19:19.354 00:19:19.354 Latency(us) 00:19:19.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.354 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:19.354 Verification LBA range: start 0x0 length 0x4000 00:19:19.354 NVMe0n1 : 10.01 4997.42 19.52 4229.14 0.00 13847.46 640.47 3035150.89 00:19:19.354 =================================================================================================================== 00:19:19.354 Total : 4997.42 19.52 4229.14 0.00 13847.46 0.00 3035150.89 00:19:19.354 0 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81735 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81735 ']' 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81735 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81735 00:19:19.354 killing process with pid 81735 00:19:19.354 Received shutdown signal, test time was about 10.000000 seconds 00:19:19.354 00:19:19.354 Latency(us) 00:19:19.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.354 =================================================================================================================== 00:19:19.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81735' 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81735 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81735 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81978 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81978 /var/tmp/bdevperf.sock 00:19:19.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81978 ']' 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.354 10:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:19.354 [2024-07-25 10:57:48.680570] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:19.354 [2024-07-25 10:57:48.680990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81978 ] 00:19:19.354 [2024-07-25 10:57:48.822339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.354 [2024-07-25 10:57:48.974385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.354 [2024-07-25 10:57:49.049735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:20.292 10:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.292 10:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:20.292 10:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81994 00:19:20.292 10:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81978 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:20.292 10:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:20.292 10:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:20.859 NVMe0n1 00:19:20.859 10:57:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82034 00:19:20.859 10:57:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:20.859 10:57:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:20.859 Running I/O for 10 seconds... 
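This block sets up the second case: a fresh bdevperf (pid 81978, started with -z so the workload only begins when the perform_tests RPC arrives), a bpftrace script attached to that pid to log reset/reconnect events, and a controller attached with the two knobs under test, --reconnect-delay-sec 2 (wait two seconds between reconnect attempts) and --ctrlr-loss-timeout-sec 5 (stop retrying after five seconds without a successful reconnect). A condensed sketch of the same setup, assuming it runs from the SPDK repo root with the bdevperf RPC socket at /var/tmp/bdevperf.sock; the redirect of the bpftrace output into test/nvmf/host/trace.txt is an assumption here, the log only shows the script being attached and that file being cat'ed later:

# Attach the tracing script to the bdevperf process (pid as shown above).
scripts/bpftrace.sh 81978 scripts/bpf/nvmf_timeout.bt > test/nvmf/host/trace.txt &
# Same bdev_nvme option string as above, then attach with the reconnect knobs under test.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the 10 s randread workload that bdevperf -z is waiting for.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &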
00:19:21.796 10:57:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.057 [2024-07-25 10:57:51.608158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.057 [2024-07-25 10:57:51.608499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.057 [2024-07-25 10:57:51.608551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.057 [2024-07-25 10:57:51.608562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.057 [2024-07-25 10:57:51.608573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.057 [2024-07-25 10:57:51.608583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.057 [2024-07-25 10:57:51.608594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.057 [2024-07-25 10:57:51.608604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.057 [2024-07-25 10:57:51.608614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.057 [2024-07-25 10:57:51.608623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.057 [2024-07-25 10:57:51.608633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.057 [2024-07-25 10:57:51.608642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.057 [2024-07-25 10:57:51.608653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.057 [2024-07-25 10:57:51.608662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.057 [2024-07-25 10:57:51.608672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.057 [2024-07-25 10:57:51.608681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.057 [2024-07-25 10:57:51.608691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.057 [2024-07-25 10:57:51.608701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.057 [2024-07-25 10:57:51.608711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 
[2024-07-25 10:57:51.608958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.608988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.608997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609147] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.058 [2024-07-25 10:57:51.609491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.058 [2024-07-25 10:57:51.609499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:22.059 [2024-07-25 10:57:51.609922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.609979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.609988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610141] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.059 [2024-07-25 10:57:51.610293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.059 [2024-07-25 10:57:51.610304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610551] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.610868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.610876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.611329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.611397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.611781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.611932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.612055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.612190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.612301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.612433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.612608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.612729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.612867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.612987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.613104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.613211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.613276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.613398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.613458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.613557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.613724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.060 [2024-07-25 10:57:51.613903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.060 [2024-07-25 10:57:51.614033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f976a0 is same with the state(5) to be set 00:19:22.060 [2024-07-25 10:57:51.614053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:22.060 [2024-07-25 10:57:51.614062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:22.061 [2024-07-25 10:57:51.614071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86320 len:8 PRP1 0x0 PRP2 0x0 00:19:22.061 [2024-07-25 10:57:51.614081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.061 [2024-07-25 10:57:51.614147] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f976a0 was disconnected and freed. reset controller. 
00:19:22.061 [2024-07-25 10:57:51.614242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.061 [2024-07-25 10:57:51.614258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.061 [2024-07-25 10:57:51.614270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.061 [2024-07-25 10:57:51.614279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.061 [2024-07-25 10:57:51.614289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.061 [2024-07-25 10:57:51.614298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.061 [2024-07-25 10:57:51.614323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.061 [2024-07-25 10:57:51.614332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.061 [2024-07-25 10:57:51.614340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46c00 is same with the state(5) to be set 00:19:22.061 [2024-07-25 10:57:51.614621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.061 [2024-07-25 10:57:51.614645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46c00 (9): Bad file descriptor 00:19:22.061 [2024-07-25 10:57:51.614741] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.061 [2024-07-25 10:57:51.614761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46c00 with addr=10.0.0.2, port=4420 00:19:22.061 [2024-07-25 10:57:51.614771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46c00 is same with the state(5) to be set 00:19:22.061 [2024-07-25 10:57:51.614788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46c00 (9): Bad file descriptor 00:19:22.061 [2024-07-25 10:57:51.614803] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:22.061 [2024-07-25 10:57:51.614812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:22.061 [2024-07-25 10:57:51.614822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:22.061 [2024-07-25 10:57:51.614839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:22.061 [2024-07-25 10:57:51.614862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.061 10:57:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82034 00:19:23.963 [2024-07-25 10:57:53.615203] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:23.963 [2024-07-25 10:57:53.615536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46c00 with addr=10.0.0.2, port=4420 00:19:23.963 [2024-07-25 10:57:53.615690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46c00 is same with the state(5) to be set 00:19:23.963 [2024-07-25 10:57:53.616025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46c00 (9): Bad file descriptor 00:19:23.963 [2024-07-25 10:57:53.616307] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:23.963 [2024-07-25 10:57:53.616542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:23.963 [2024-07-25 10:57:53.616797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:23.963 [2024-07-25 10:57:53.617021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:23.963 [2024-07-25 10:57:53.617171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.523 [2024-07-25 10:57:55.617510] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.523 [2024-07-25 10:57:55.617825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46c00 with addr=10.0.0.2, port=4420 00:19:26.523 [2024-07-25 10:57:55.618008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46c00 is same with the state(5) to be set 00:19:26.523 [2024-07-25 10:57:55.618170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46c00 (9): Bad file descriptor 00:19:26.523 [2024-07-25 10:57:55.618449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.523 [2024-07-25 10:57:55.618634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:26.523 [2024-07-25 10:57:55.618654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:26.523 [2024-07-25 10:57:55.618687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:26.523 [2024-07-25 10:57:55.618701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:27.900 [2024-07-25 10:57:57.618776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
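By 10:57:57 the 5 s ctrlr-loss timeout has run out, so bdev_nvme stops scheduling reconnects and leaves the controller in the failed state. The verdict for this case then comes from the bpftrace output cat'ed a little further down: each scheduled reconnect logs one "reconnect delay bdev controller NVMe0" line, three were recorded (at 3313, 5315 and 7317 ms, i.e. the expected 2 s spacing), and the (( 3 <= 2 )) guard evaluates false, presumably letting the script continue to teardown instead of failing the test. An equivalent check, sketched with illustrative variable names rather than the script's own:

trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
# With a 2 s reconnect delay, the ~8 s window should produce more than two delay events.
if (( delays <= 2 )); then
    echo "only $delays reconnect delays recorded" >&2
    exit 1
fi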
00:19:27.900 [2024-07-25 10:57:57.618847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:27.900 [2024-07-25 10:57:57.618886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:27.900 [2024-07-25 10:57:57.618897] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:27.900 [2024-07-25 10:57:57.618926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:29.278 00:19:29.278 Latency(us) 00:19:29.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.278 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:29.278 NVMe0n1 : 8.18 2188.13 8.55 15.64 0.00 58013.37 7626.01 7046430.72 00:19:29.278 =================================================================================================================== 00:19:29.278 Total : 2188.13 8.55 15.64 0.00 58013.37 7626.01 7046430.72 00:19:29.278 0 00:19:29.278 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:29.278 Attaching 5 probes... 00:19:29.278 1313.114389: reset bdev controller NVMe0 00:19:29.278 1313.180589: reconnect bdev controller NVMe0 00:19:29.278 3313.561423: reconnect delay bdev controller NVMe0 00:19:29.278 3313.583788: reconnect bdev controller NVMe0 00:19:29.278 5315.878732: reconnect delay bdev controller NVMe0 00:19:29.278 5315.901282: reconnect bdev controller NVMe0 00:19:29.278 7317.249492: reconnect delay bdev controller NVMe0 00:19:29.278 7317.272545: reconnect bdev controller NVMe0 00:19:29.278 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:29.278 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:29.278 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81994 00:19:29.278 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:29.278 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81978 00:19:29.279 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81978 ']' 00:19:29.279 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81978 00:19:29.279 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:29.279 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.279 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81978 00:19:29.279 killing process with pid 81978 00:19:29.279 Received shutdown signal, test time was about 8.241909 seconds 00:19:29.279 00:19:29.279 Latency(us) 00:19:29.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.279 =================================================================================================================== 00:19:29.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.279 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:29.279 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:29.279 10:57:58 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81978' 00:19:29.279 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81978 00:19:29.279 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81978 00:19:29.279 10:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.538 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:29.538 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:29.538 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:29.538 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.797 rmmod nvme_tcp 00:19:29.797 rmmod nvme_fabrics 00:19:29.797 rmmod nvme_keyring 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81540 ']' 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81540 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81540 ']' 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81540 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81540 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:29.797 killing process with pid 81540 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81540' 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81540 00:19:29.797 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81540 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.056 10:57:59 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:30.056 00:19:30.056 real 0m47.493s 00:19:30.056 user 2m19.158s 00:19:30.056 sys 0m5.785s 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:30.056 ************************************ 00:19:30.056 END TEST nvmf_timeout 00:19:30.056 ************************************ 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:30.056 00:19:30.056 real 5m10.019s 00:19:30.056 user 13m30.854s 00:19:30.056 sys 1m10.512s 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.056 10:57:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.056 ************************************ 00:19:30.056 END TEST nvmf_host 00:19:30.056 ************************************ 00:19:30.316 00:19:30.316 real 12m20.685s 00:19:30.316 user 29m58.332s 00:19:30.316 sys 3m6.683s 00:19:30.316 10:57:59 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.316 ************************************ 00:19:30.316 END TEST nvmf_tcp 00:19:30.316 ************************************ 00:19:30.316 10:57:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:30.316 10:57:59 -- spdk/autotest.sh@292 -- # [[ 1 -eq 0 ]] 00:19:30.316 10:57:59 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:30.316 10:57:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:30.316 10:57:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:30.316 10:57:59 -- common/autotest_common.sh@10 -- # set +x 00:19:30.316 ************************************ 00:19:30.316 START TEST nvmf_dif 00:19:30.316 ************************************ 00:19:30.316 10:57:59 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:30.316 * Looking for test storage... 
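Condensed, the nvmf_timeout/nvmf_tcp teardown traced just above comes down to the following steps (names and pid taken from the trace; _remove_spdk_ns is assumed to delete the nvmf_tgt_ns_spdk namespace, which would explain the "No such file or directory" messages when the dif test rebuilds it below):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp            # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
kill 81540                         # killprocess $nvmfpid (81540 here) stops the nvmf_tgt app
ip netns delete nvmf_tgt_ns_spdk   # assumed effect of _remove_spdk_ns
ip -4 addr flush nvmf_init_if      # clear the host-side veth address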
00:19:30.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:30.316 10:57:59 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.316 10:57:59 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.316 10:57:59 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.316 10:57:59 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.316 10:57:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.316 10:57:59 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.316 10:57:59 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.316 10:57:59 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:30.316 10:57:59 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:30.316 10:57:59 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:30.316 10:57:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:30.316 10:57:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:30.316 10:57:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:30.316 10:57:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:30.316 10:57:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.317 10:57:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:30.317 10:57:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.317 10:57:59 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.317 10:57:59 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:30.317 10:58:00 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:30.317 Cannot find device "nvmf_tgt_br" 00:19:30.317 10:58:00 nvmf_dif -- nvmf/common.sh@155 -- # true 00:19:30.317 10:58:00 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.317 Cannot find device "nvmf_tgt_br2" 00:19:30.317 10:58:00 nvmf_dif -- nvmf/common.sh@156 -- # true 00:19:30.317 10:58:00 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:30.317 10:58:00 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:30.576 Cannot find device "nvmf_tgt_br" 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@158 -- # true 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:30.576 Cannot find device "nvmf_tgt_br2" 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@159 -- # true 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.576 
10:58:00 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:30.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:19:30.576 00:19:30.576 --- 10.0.0.2 ping statistics --- 00:19:30.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.576 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:30.576 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:30.576 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:19:30.576 00:19:30.576 --- 10.0.0.3 ping statistics --- 00:19:30.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.576 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:30.576 00:19:30.576 --- 10.0.0.1 ping statistics --- 00:19:30.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.576 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:30.576 10:58:00 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:31.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:31.143 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:31.143 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:31.143 10:58:00 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.143 10:58:00 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:31.143 10:58:00 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:31.143 10:58:00 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.143 10:58:00 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:31.143 10:58:00 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:31.143 10:58:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:31.143 10:58:00 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:31.143 10:58:00 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.143 10:58:00 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.144 10:58:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:31.144 10:58:00 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82475 00:19:31.144 
10:58:00 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:31.144 10:58:00 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82475 00:19:31.144 10:58:00 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 82475 ']' 00:19:31.144 10:58:00 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.144 10:58:00 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.144 10:58:00 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.144 10:58:00 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.144 10:58:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:31.144 [2024-07-25 10:58:00.782404] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:31.144 [2024-07-25 10:58:00.782533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.402 [2024-07-25 10:58:00.929547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.402 [2024-07-25 10:58:01.060544] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.402 [2024-07-25 10:58:01.060625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.402 [2024-07-25 10:58:01.060647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.402 [2024-07-25 10:58:01.060668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.402 [2024-07-25 10:58:01.060682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
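For reference, the nvmf_veth_init sequence traced above reduces to the topology below; this is a condensed sketch using the interface names and addresses from the trace, it omits the second target interface (nvmf_tgt_if2 / 10.0.0.3), and it needs root:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br    # bridge the two host-side veth peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # host -> target namespace, as verified above

The target itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF, as shown above), so its TCP listener at 10.0.0.2:4420 is only reachable over this bridge, while its RPC Unix socket remains visible to the host.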
00:19:31.402 [2024-07-25 10:58:01.060731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.402 [2024-07-25 10:58:01.118145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:32.339 10:58:01 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.339 10:58:01 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:19:32.339 10:58:01 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.339 10:58:01 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.339 10:58:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:32.339 10:58:01 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.339 10:58:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:32.339 10:58:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:32.339 10:58:01 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.339 10:58:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:32.339 [2024-07-25 10:58:01.760902] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.339 10:58:01 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.339 10:58:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:32.339 10:58:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:32.339 10:58:01 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:32.339 10:58:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:32.339 ************************************ 00:19:32.339 START TEST fio_dif_1_default 00:19:32.339 ************************************ 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:32.339 bdev_null0 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.339 10:58:01 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:32.339 [2024-07-25 10:58:01.805035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.339 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.340 { 00:19:32.340 "params": { 00:19:32.340 "name": "Nvme$subsystem", 00:19:32.340 "trtype": "$TEST_TRANSPORT", 00:19:32.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.340 "adrfam": "ipv4", 00:19:32.340 "trsvcid": "$NVMF_PORT", 00:19:32.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.340 "hdgst": ${hdgst:-false}, 00:19:32.340 "ddgst": ${ddgst:-false} 00:19:32.340 }, 00:19:32.340 "method": "bdev_nvme_attach_controller" 00:19:32.340 } 00:19:32.340 EOF 00:19:32.340 )") 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@72 -- # (( file = 1 )) 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:32.340 "params": { 00:19:32.340 "name": "Nvme0", 00:19:32.340 "trtype": "tcp", 00:19:32.340 "traddr": "10.0.0.2", 00:19:32.340 "adrfam": "ipv4", 00:19:32.340 "trsvcid": "4420", 00:19:32.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:32.340 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:32.340 "hdgst": false, 00:19:32.340 "ddgst": false 00:19:32.340 }, 00:19:32.340 "method": "bdev_nvme_attach_controller" 00:19:32.340 }' 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:32.340 10:58:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:32.340 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:32.340 fio-3.35 00:19:32.340 Starting 1 thread 00:19:44.552 00:19:44.552 filename0: (groupid=0, jobs=1): err= 0: pid=82542: Thu Jul 25 10:58:12 2024 00:19:44.552 read: IOPS=8521, BW=33.3MiB/s (34.9MB/s)(333MiB/10001msec) 00:19:44.552 slat (usec): min=6, max=383, avg= 8.77, stdev= 4.01 00:19:44.552 clat (usec): min=369, max=3503, avg=443.48, stdev=43.51 00:19:44.552 lat (usec): min=376, max=3549, avg=452.25, stdev=44.07 00:19:44.552 clat percentiles (usec): 00:19:44.552 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 424], 00:19:44.552 | 30.00th=[ 429], 40.00th=[ 433], 50.00th=[ 441], 60.00th=[ 445], 00:19:44.552 | 70.00th=[ 453], 80.00th=[ 457], 90.00th=[ 474], 95.00th=[ 486], 00:19:44.552 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 693], 99.95th=[ 783], 00:19:44.552 | 99.99th=[ 1549] 00:19:44.552 bw ( KiB/s): min=32096, max=35520, per=100.00%, avg=34122.11, stdev=789.05, samples=19 00:19:44.552 iops : min= 8024, max= 8880, avg=8530.53, stdev=197.26, samples=19 00:19:44.552 lat (usec) : 500=97.25%, 750=2.70%, 1000=0.02% 00:19:44.552 lat 
(msec) : 2=0.02%, 4=0.01% 00:19:44.552 cpu : usr=84.33%, sys=13.50%, ctx=71, majf=0, minf=0 00:19:44.552 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:44.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.552 issued rwts: total=85227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.552 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:44.552 00:19:44.552 Run status group 0 (all jobs): 00:19:44.552 READ: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=333MiB (349MB), run=10001-10001msec 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.552 00:19:44.552 real 0m11.054s 00:19:44.552 user 0m9.073s 00:19:44.552 sys 0m1.669s 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:44.552 ************************************ 00:19:44.552 END TEST fio_dif_1_default 00:19:44.552 ************************************ 00:19:44.552 10:58:12 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:44.552 10:58:12 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:44.552 10:58:12 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:44.552 ************************************ 00:19:44.552 START TEST fio_dif_1_multi_subsystems 00:19:44.552 ************************************ 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:44.552 10:58:12 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.552 bdev_null0 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.552 [2024-07-25 10:58:12.913451] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.552 bdev_null1 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.552 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.553 { 00:19:44.553 "params": { 00:19:44.553 "name": "Nvme$subsystem", 00:19:44.553 "trtype": "$TEST_TRANSPORT", 00:19:44.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.553 "adrfam": "ipv4", 00:19:44.553 "trsvcid": "$NVMF_PORT", 00:19:44.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.553 "hdgst": ${hdgst:-false}, 00:19:44.553 "ddgst": ${ddgst:-false} 00:19:44.553 }, 00:19:44.553 "method": "bdev_nvme_attach_controller" 00:19:44.553 } 00:19:44.553 EOF 00:19:44.553 )") 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.553 { 00:19:44.553 "params": { 00:19:44.553 "name": "Nvme$subsystem", 00:19:44.553 "trtype": "$TEST_TRANSPORT", 00:19:44.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.553 "adrfam": "ipv4", 00:19:44.553 "trsvcid": "$NVMF_PORT", 00:19:44.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.553 "hdgst": ${hdgst:-false}, 00:19:44.553 "ddgst": ${ddgst:-false} 00:19:44.553 }, 00:19:44.553 "method": "bdev_nvme_attach_controller" 00:19:44.553 } 00:19:44.553 EOF 00:19:44.553 )") 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
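What gen_nvmf_target_json and fio_bdev are assembling here is, in effect, an SPDK JSON config with one bdev_nvme_attach_controller entry per subsystem, handed to stock fio through the spdk_bdev ioengine via LD_PRELOAD. A stand-alone sketch of the same invocation follows; the temp-file paths are illustrative (the test streams both the config and the job file over /dev/fd instead), the outer "subsystems"/"bdev" wrapper is the standard SPDK config layout and may differ cosmetically from the script's output, and the job parameters mirror the fio banner above (randread, 4 KiB blocks, iodepth 4, ~10 s):

cat > /tmp/dif_bdev.json <<'JSON'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }
      ] }
  ]
}
JSON
cat > /tmp/dif.fio <<'FIO'
[global]
thread=1
time_based=1
runtime=10
rw=randread
bs=4k
iodepth=4
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
FIO
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/dif_bdev.json /tmp/dif.fio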
00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:44.553 "params": { 00:19:44.553 "name": "Nvme0", 00:19:44.553 "trtype": "tcp", 00:19:44.553 "traddr": "10.0.0.2", 00:19:44.553 "adrfam": "ipv4", 00:19:44.553 "trsvcid": "4420", 00:19:44.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:44.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:44.553 "hdgst": false, 00:19:44.553 "ddgst": false 00:19:44.553 }, 00:19:44.553 "method": "bdev_nvme_attach_controller" 00:19:44.553 },{ 00:19:44.553 "params": { 00:19:44.553 "name": "Nvme1", 00:19:44.553 "trtype": "tcp", 00:19:44.553 "traddr": "10.0.0.2", 00:19:44.553 "adrfam": "ipv4", 00:19:44.553 "trsvcid": "4420", 00:19:44.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.553 "hdgst": false, 00:19:44.553 "ddgst": false 00:19:44.553 }, 00:19:44.553 "method": "bdev_nvme_attach_controller" 00:19:44.553 }' 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:44.553 10:58:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:44.553 10:58:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:44.553 10:58:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:44.553 10:58:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:44.553 10:58:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:44.553 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:44.553 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:44.553 fio-3.35 00:19:44.553 Starting 2 threads 00:19:54.530 00:19:54.530 filename0: (groupid=0, jobs=1): err= 0: pid=82706: Thu Jul 25 10:58:23 2024 00:19:54.530 read: IOPS=4721, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:19:54.530 slat (usec): min=4, max=425, avg=13.50, stdev= 5.16 00:19:54.530 clat (usec): min=513, max=3157, avg=810.54, stdev=62.50 00:19:54.530 lat (usec): min=521, max=3169, avg=824.04, stdev=62.86 00:19:54.530 clat percentiles (usec): 00:19:54.530 | 1.00th=[ 717], 5.00th=[ 742], 10.00th=[ 758], 20.00th=[ 775], 00:19:54.530 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 816], 00:19:54.530 | 70.00th=[ 824], 80.00th=[ 832], 90.00th=[ 857], 95.00th=[ 881], 00:19:54.530 | 99.00th=[ 1020], 99.50th=[ 1057], 99.90th=[ 1385], 99.95th=[ 1729], 00:19:54.530 | 99.99th=[ 1893] 00:19:54.530 bw ( KiB/s): min=17664, max=19264, per=49.99%, avg=18886.74, stdev=435.06, samples=19 00:19:54.530 iops : min= 4416, max= 4816, 
avg=4721.68, stdev=108.77, samples=19 00:19:54.530 lat (usec) : 750=6.25%, 1000=92.01% 00:19:54.530 lat (msec) : 2=1.73%, 4=0.01% 00:19:54.530 cpu : usr=89.64%, sys=8.62%, ctx=84, majf=0, minf=9 00:19:54.530 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.530 issued rwts: total=47216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.530 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:54.530 filename1: (groupid=0, jobs=1): err= 0: pid=82707: Thu Jul 25 10:58:23 2024 00:19:54.530 read: IOPS=4724, BW=18.5MiB/s (19.3MB/s)(185MiB/10001msec) 00:19:54.530 slat (nsec): min=7069, max=62905, avg=14119.36, stdev=4597.04 00:19:54.530 clat (usec): min=427, max=3147, avg=805.66, stdev=58.29 00:19:54.530 lat (usec): min=437, max=3163, avg=819.78, stdev=58.95 00:19:54.530 clat percentiles (usec): 00:19:54.530 | 1.00th=[ 734], 5.00th=[ 750], 10.00th=[ 766], 20.00th=[ 775], 00:19:54.530 | 30.00th=[ 783], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:19:54.530 | 70.00th=[ 816], 80.00th=[ 824], 90.00th=[ 840], 95.00th=[ 865], 00:19:54.530 | 99.00th=[ 1012], 99.50th=[ 1037], 99.90th=[ 1352], 99.95th=[ 1713], 00:19:54.530 | 99.99th=[ 1876] 00:19:54.530 bw ( KiB/s): min=17696, max=19264, per=50.02%, avg=18900.47, stdev=433.58, samples=19 00:19:54.530 iops : min= 4424, max= 4816, avg=4725.11, stdev=108.39, samples=19 00:19:54.530 lat (usec) : 500=0.06%, 750=3.90%, 1000=94.64% 00:19:54.530 lat (msec) : 2=1.39%, 4=0.01% 00:19:54.530 cpu : usr=87.92%, sys=10.40%, ctx=961, majf=0, minf=0 00:19:54.530 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.530 issued rwts: total=47248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.530 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:54.530 00:19:54.530 Run status group 0 (all jobs): 00:19:54.530 READ: bw=36.9MiB/s (38.7MB/s), 18.4MiB/s-18.5MiB/s (19.3MB/s-19.3MB/s), io=369MiB (387MB), run=10001-10001msec 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.530 00:19:54.530 real 0m11.156s 00:19:54.530 user 0m18.543s 00:19:54.530 sys 0m2.201s 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:54.530 10:58:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.530 ************************************ 00:19:54.530 END TEST fio_dif_1_multi_subsystems 00:19:54.530 ************************************ 00:19:54.530 10:58:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:54.530 10:58:24 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:54.530 10:58:24 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:54.530 10:58:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:54.530 ************************************ 00:19:54.530 START TEST fio_dif_rand_params 00:19:54.530 ************************************ 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:54.530 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:54.531 10:58:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:54.531 bdev_null0 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:54.531 [2024-07-25 10:58:24.124127] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.531 { 00:19:54.531 "params": { 00:19:54.531 "name": "Nvme$subsystem", 00:19:54.531 "trtype": "$TEST_TRANSPORT", 00:19:54.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.531 "adrfam": "ipv4", 00:19:54.531 "trsvcid": "$NVMF_PORT", 00:19:54.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.531 "hdgst": ${hdgst:-false}, 00:19:54.531 "ddgst": ${ddgst:-false} 00:19:54.531 }, 00:19:54.531 "method": "bdev_nvme_attach_controller" 00:19:54.531 } 00:19:54.531 EOF 00:19:54.531 )") 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:54.531 
10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
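
The target-side setup echoed just above for this subtest reduces to four RPCs: create a 64 MiB, 512-byte-block null bdev with 16-byte metadata and DIF type 3, create subsystem cnode0, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A hand-run equivalent is sketched below as a rough guide only; it assumes a running spdk_tgt whose TCP transport was created earlier in the test (not visible in this excerpt), and it calls scripts/rpc.py directly in place of the harness's rpc_cmd wrapper (the rpc.py path is the standard repo layout, not something printed here).

# Sketch of the same setup via scripts/rpc.py (rpc_cmd is the harness wrapper
# around these calls). Assumes spdk_tgt is already running and that
# `nvmf_create_transport -t tcp` was issued earlier in the test.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path (standard SPDK layout)

$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
     --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
     -t tcp -a 10.0.0.2 -s 4420
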
00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:54.531 "params": { 00:19:54.531 "name": "Nvme0", 00:19:54.531 "trtype": "tcp", 00:19:54.531 "traddr": "10.0.0.2", 00:19:54.531 "adrfam": "ipv4", 00:19:54.531 "trsvcid": "4420", 00:19:54.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:54.531 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:54.531 "hdgst": false, 00:19:54.531 "ddgst": false 00:19:54.531 }, 00:19:54.531 "method": "bdev_nvme_attach_controller" 00:19:54.531 }' 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:54.531 10:58:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:54.790 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:54.790 ... 
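
The pretty-printed JSON above is the bdev_nvme_attach_controller entry that gets folded into the bdev config consumed by the fio spdk_bdev plugin, and the LD_PRELOAD line shows how that plugin is injected into a stock fio binary. A standalone approximation is sketched below; the outer "subsystems"/"config" wrapper, the /tmp paths, and the job-file details (filename=Nvme0n1, thread=1) are assumptions for illustration, since the harness generates both files on the fly and hands them to fio as /dev/fd descriptors.

# Sketch: reproduce this fio run outside the harness. The attach-controller
# parameters are copied from the printf output above; the wrapper object and
# the job file are assumed.
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

cat > /tmp/dif.fio <<'FIO'
[global]
; thread=1 is required by the SPDK fio plugin; the filename below is the
; assumed bdev name exposed by the attached controller (NvmeXnY naming).
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5
[filename0]
filename=Nvme0n1
FIO

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio
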
00:19:54.790 fio-3.35 00:19:54.790 Starting 3 threads 00:20:01.355 00:20:01.355 filename0: (groupid=0, jobs=1): err= 0: pid=82859: Thu Jul 25 10:58:29 2024 00:20:01.355 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5001msec) 00:20:01.355 slat (nsec): min=7612, max=38145, avg=10761.26, stdev=3881.12 00:20:01.355 clat (usec): min=9826, max=13026, avg=11494.91, stdev=174.92 00:20:01.355 lat (usec): min=9834, max=13040, avg=11505.67, stdev=175.19 00:20:01.355 clat percentiles (usec): 00:20:01.355 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:01.355 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:01.355 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11863], 00:20:01.355 | 99.00th=[11994], 99.50th=[12125], 99.90th=[13042], 99.95th=[13042], 00:20:01.355 | 99.99th=[13042] 00:20:01.355 bw ( KiB/s): min=33024, max=33792, per=33.37%, avg=33365.33, stdev=404.77, samples=9 00:20:01.355 iops : min= 258, max= 264, avg=260.67, stdev= 3.16, samples=9 00:20:01.355 lat (msec) : 10=0.23%, 20=99.77% 00:20:01.355 cpu : usr=89.98%, sys=9.32%, ctx=54, majf=0, minf=9 00:20:01.355 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.355 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.355 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:01.355 filename0: (groupid=0, jobs=1): err= 0: pid=82860: Thu Jul 25 10:58:29 2024 00:20:01.355 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5005msec) 00:20:01.355 slat (nsec): min=7565, max=33834, avg=9684.81, stdev=2600.10 00:20:01.355 clat (usec): min=4879, max=13555, avg=11480.89, stdev=390.94 00:20:01.355 lat (usec): min=4887, max=13580, avg=11490.58, stdev=390.87 00:20:01.355 clat percentiles (usec): 00:20:01.355 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:01.355 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:01.355 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11863], 00:20:01.355 | 99.00th=[11994], 99.50th=[12125], 99.90th=[13566], 99.95th=[13566], 00:20:01.355 | 99.99th=[13566] 00:20:01.355 bw ( KiB/s): min=33024, max=33792, per=33.30%, avg=33287.33, stdev=379.10, samples=9 00:20:01.355 iops : min= 258, max= 264, avg=260.00, stdev= 3.00, samples=9 00:20:01.355 lat (msec) : 10=0.46%, 20=99.54% 00:20:01.355 cpu : usr=91.91%, sys=7.55%, ctx=11, majf=0, minf=9 00:20:01.355 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.355 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.355 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:01.355 filename0: (groupid=0, jobs=1): err= 0: pid=82861: Thu Jul 25 10:58:29 2024 00:20:01.355 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5002msec) 00:20:01.355 slat (nsec): min=7562, max=60781, avg=10018.08, stdev=3146.80 00:20:01.355 clat (usec): min=11371, max=12988, avg=11499.57, stdev=139.84 00:20:01.355 lat (usec): min=11383, max=13001, avg=11509.59, stdev=140.15 00:20:01.355 clat percentiles (usec): 00:20:01.355 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:01.355 | 30.00th=[11469], 40.00th=[11469], 
50.00th=[11469], 60.00th=[11469], 00:20:01.355 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11863], 00:20:01.355 | 99.00th=[11994], 99.50th=[12125], 99.90th=[13042], 99.95th=[13042], 00:20:01.355 | 99.99th=[13042] 00:20:01.355 bw ( KiB/s): min=33024, max=33792, per=33.37%, avg=33365.33, stdev=404.77, samples=9 00:20:01.355 iops : min= 258, max= 264, avg=260.67, stdev= 3.16, samples=9 00:20:01.355 lat (msec) : 20=100.00% 00:20:01.355 cpu : usr=91.58%, sys=7.86%, ctx=7, majf=0, minf=9 00:20:01.355 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.355 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.355 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:01.355 00:20:01.355 Run status group 0 (all jobs): 00:20:01.355 READ: bw=97.6MiB/s (102MB/s), 32.5MiB/s-32.6MiB/s (34.1MB/s-34.2MB/s), io=489MiB (512MB), run=5001-5005msec 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:01.355 10:58:30 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.355 bdev_null0 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.355 [2024-07-25 10:58:30.142100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.355 bdev_null1 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.355 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.356 bdev_null2 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.356 { 00:20:01.356 "params": { 00:20:01.356 "name": "Nvme$subsystem", 00:20:01.356 "trtype": "$TEST_TRANSPORT", 00:20:01.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.356 "adrfam": "ipv4", 00:20:01.356 "trsvcid": "$NVMF_PORT", 00:20:01.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:01.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.356 "hdgst": ${hdgst:-false}, 00:20:01.356 "ddgst": ${ddgst:-false} 00:20:01.356 }, 00:20:01.356 "method": "bdev_nvme_attach_controller" 00:20:01.356 } 00:20:01.356 EOF 00:20:01.356 )") 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.356 { 00:20:01.356 "params": { 00:20:01.356 "name": "Nvme$subsystem", 00:20:01.356 "trtype": "$TEST_TRANSPORT", 00:20:01.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.356 "adrfam": "ipv4", 00:20:01.356 "trsvcid": "$NVMF_PORT", 00:20:01.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.356 "hdgst": ${hdgst:-false}, 00:20:01.356 "ddgst": ${ddgst:-false} 00:20:01.356 }, 00:20:01.356 "method": "bdev_nvme_attach_controller" 00:20:01.356 } 00:20:01.356 EOF 00:20:01.356 )") 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:01.356 10:58:30 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.356 { 00:20:01.356 "params": { 00:20:01.356 "name": "Nvme$subsystem", 00:20:01.356 "trtype": "$TEST_TRANSPORT", 00:20:01.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.356 "adrfam": "ipv4", 00:20:01.356 "trsvcid": "$NVMF_PORT", 00:20:01.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.356 "hdgst": ${hdgst:-false}, 00:20:01.356 "ddgst": ${ddgst:-false} 00:20:01.356 }, 00:20:01.356 "method": "bdev_nvme_attach_controller" 00:20:01.356 } 00:20:01.356 EOF 00:20:01.356 )") 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:01.356 "params": { 00:20:01.356 "name": "Nvme0", 00:20:01.356 "trtype": "tcp", 00:20:01.356 "traddr": "10.0.0.2", 00:20:01.356 "adrfam": "ipv4", 00:20:01.356 "trsvcid": "4420", 00:20:01.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.356 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:01.356 "hdgst": false, 00:20:01.356 "ddgst": false 00:20:01.356 }, 00:20:01.356 "method": "bdev_nvme_attach_controller" 00:20:01.356 },{ 00:20:01.356 "params": { 00:20:01.356 "name": "Nvme1", 00:20:01.356 "trtype": "tcp", 00:20:01.356 "traddr": "10.0.0.2", 00:20:01.356 "adrfam": "ipv4", 00:20:01.356 "trsvcid": "4420", 00:20:01.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.356 "hdgst": false, 00:20:01.356 "ddgst": false 00:20:01.356 }, 00:20:01.356 "method": "bdev_nvme_attach_controller" 00:20:01.356 },{ 00:20:01.356 "params": { 00:20:01.356 "name": "Nvme2", 00:20:01.356 "trtype": "tcp", 00:20:01.356 "traddr": "10.0.0.2", 00:20:01.356 "adrfam": "ipv4", 00:20:01.356 "trsvcid": "4420", 00:20:01.356 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:01.356 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:01.356 "hdgst": false, 00:20:01.356 "ddgst": false 00:20:01.356 }, 00:20:01.356 "method": "bdev_nvme_attach_controller" 00:20:01.356 }' 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:01.356 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.357 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.357 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:01.357 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:01.357 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:01.357 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:01.357 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:01.357 10:58:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.357 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:01.357 ... 00:20:01.357 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:01.357 ... 00:20:01.357 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:01.357 ... 00:20:01.357 fio-3.35 00:20:01.357 Starting 24 threads 00:20:13.562 00:20:13.562 filename0: (groupid=0, jobs=1): err= 0: pid=82957: Thu Jul 25 10:58:41 2024 00:20:13.562 read: IOPS=216, BW=868KiB/s (889kB/s)(8708KiB/10036msec) 00:20:13.562 slat (usec): min=7, max=8020, avg=18.08, stdev=171.70 00:20:13.562 clat (msec): min=32, max=131, avg=73.61, stdev=18.65 00:20:13.562 lat (msec): min=32, max=131, avg=73.63, stdev=18.66 00:20:13.562 clat percentiles (msec): 00:20:13.562 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:20:13.562 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:20:13.562 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 108], 00:20:13.562 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 127], 99.95th=[ 132], 00:20:13.562 | 99.99th=[ 132] 00:20:13.562 bw ( KiB/s): min= 712, max= 976, per=4.23%, avg=864.40, stdev=90.23, samples=20 00:20:13.562 iops : min= 178, max= 244, avg=216.10, stdev=22.56, samples=20 00:20:13.562 lat (msec) : 50=17.32%, 100=74.23%, 250=8.45% 00:20:13.562 cpu : usr=31.38%, sys=1.82%, ctx=855, majf=0, minf=9 00:20:13.562 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.6%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:13.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.562 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.562 issued rwts: total=2177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.562 filename0: (groupid=0, jobs=1): err= 0: pid=82958: Thu Jul 25 10:58:41 2024 00:20:13.562 read: IOPS=204, BW=818KiB/s (837kB/s)(8220KiB/10055msec) 00:20:13.562 slat (usec): min=3, max=5033, avg=16.76, stdev=141.09 00:20:13.562 clat (usec): min=1126, max=154899, avg=78033.46, stdev=26178.13 00:20:13.562 lat (usec): min=1134, max=154913, avg=78050.22, stdev=26177.63 00:20:13.562 clat percentiles (msec): 00:20:13.562 | 1.00th=[ 5], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 61], 00:20:13.562 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:20:13.562 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 121], 00:20:13.562 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:20:13.562 | 99.99th=[ 155] 00:20:13.562 bw ( KiB/s): min= 528, max= 1280, per=4.00%, avg=817.75, stdev=170.12, samples=20 00:20:13.562 iops : min= 132, max= 320, avg=204.40, stdev=42.54, samples=20 00:20:13.562 lat (msec) : 2=0.10%, 10=3.80%, 50=9.98%, 100=70.17%, 250=15.96% 00:20:13.562 cpu : usr=35.82%, sys=2.04%, ctx=1202, majf=0, minf=0 00:20:13.562 IO depths : 1=0.1%, 2=3.2%, 4=12.4%, 8=69.5%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:13.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.562 complete : 0=0.0%, 4=91.0%, 8=6.3%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.562 issued rwts: total=2055,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:20:13.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.562 filename0: (groupid=0, jobs=1): err= 0: pid=82959: Thu Jul 25 10:58:41 2024 00:20:13.562 read: IOPS=203, BW=815KiB/s (834kB/s)(8156KiB/10012msec) 00:20:13.562 slat (usec): min=4, max=7034, avg=29.20, stdev=286.99 00:20:13.562 clat (msec): min=16, max=185, avg=78.37, stdev=23.59 00:20:13.562 lat (msec): min=16, max=185, avg=78.40, stdev=23.58 00:20:13.562 clat percentiles (msec): 00:20:13.562 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:20:13.562 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 83], 00:20:13.562 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 121], 00:20:13.562 | 99.00th=[ 138], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 186], 00:20:13.562 | 99.99th=[ 186] 00:20:13.562 bw ( KiB/s): min= 496, max= 1024, per=3.90%, avg=797.68, stdev=176.74, samples=19 00:20:13.562 iops : min= 124, max= 256, avg=199.42, stdev=44.19, samples=19 00:20:13.562 lat (msec) : 20=0.49%, 50=13.39%, 100=70.03%, 250=16.09% 00:20:13.562 cpu : usr=40.92%, sys=2.12%, ctx=1365, majf=0, minf=9 00:20:13.563 IO depths : 1=0.1%, 2=2.9%, 4=11.7%, 8=71.0%, 16=14.3%, 32=0.0%, >=64=0.0% 00:20:13.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 complete : 0=0.0%, 4=90.3%, 8=7.1%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.563 filename0: (groupid=0, jobs=1): err= 0: pid=82960: Thu Jul 25 10:58:41 2024 00:20:13.563 read: IOPS=209, BW=837KiB/s (857kB/s)(8404KiB/10036msec) 00:20:13.563 slat (usec): min=7, max=8023, avg=22.92, stdev=235.30 00:20:13.563 clat (msec): min=35, max=151, avg=76.20, stdev=21.34 00:20:13.563 lat (msec): min=35, max=151, avg=76.22, stdev=21.34 00:20:13.563 clat percentiles (msec): 00:20:13.563 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:20:13.563 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 81], 00:20:13.563 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 111], 00:20:13.563 | 99.00th=[ 131], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 153], 00:20:13.563 | 99.99th=[ 153] 00:20:13.563 bw ( KiB/s): min= 512, max= 976, per=4.09%, avg=836.80, stdev=165.72, samples=20 00:20:13.563 iops : min= 128, max= 244, avg=209.20, stdev=41.43, samples=20 00:20:13.563 lat (msec) : 50=14.28%, 100=69.21%, 250=16.52% 00:20:13.563 cpu : usr=42.94%, sys=2.68%, ctx=1186, majf=0, minf=9 00:20:13.563 IO depths : 1=0.1%, 2=2.0%, 4=7.9%, 8=74.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:13.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 complete : 0=0.0%, 4=89.4%, 8=8.8%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 issued rwts: total=2101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.563 filename0: (groupid=0, jobs=1): err= 0: pid=82961: Thu Jul 25 10:58:41 2024 00:20:13.563 read: IOPS=197, BW=789KiB/s (808kB/s)(7892KiB/10005msec) 00:20:13.563 slat (usec): min=3, max=8025, avg=22.61, stdev=267.30 00:20:13.563 clat (msec): min=7, max=215, avg=80.94, stdev=24.19 00:20:13.563 lat (msec): min=7, max=215, avg=80.97, stdev=24.19 00:20:13.563 clat percentiles (msec): 00:20:13.563 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 62], 00:20:13.563 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 86], 00:20:13.563 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 
108], 95.00th=[ 127], 00:20:13.563 | 99.00th=[ 142], 99.50th=[ 174], 99.90th=[ 215], 99.95th=[ 215], 00:20:13.563 | 99.99th=[ 215] 00:20:13.563 bw ( KiB/s): min= 509, max= 1000, per=3.76%, avg=768.68, stdev=159.60, samples=19 00:20:13.563 iops : min= 127, max= 250, avg=192.16, stdev=39.92, samples=19 00:20:13.563 lat (msec) : 10=0.15%, 20=0.61%, 50=11.30%, 100=68.37%, 250=19.56% 00:20:13.563 cpu : usr=31.26%, sys=1.95%, ctx=1038, majf=0, minf=9 00:20:13.563 IO depths : 1=0.1%, 2=4.0%, 4=15.8%, 8=66.5%, 16=13.7%, 32=0.0%, >=64=0.0% 00:20:13.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 complete : 0=0.0%, 4=91.5%, 8=5.0%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 issued rwts: total=1973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.563 filename0: (groupid=0, jobs=1): err= 0: pid=82962: Thu Jul 25 10:58:41 2024 00:20:13.563 read: IOPS=227, BW=910KiB/s (932kB/s)(9144KiB/10045msec) 00:20:13.563 slat (usec): min=3, max=4026, avg=16.78, stdev=118.76 00:20:13.563 clat (msec): min=2, max=127, avg=70.12, stdev=22.15 00:20:13.563 lat (msec): min=2, max=127, avg=70.14, stdev=22.15 00:20:13.563 clat percentiles (msec): 00:20:13.563 | 1.00th=[ 5], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 50], 00:20:13.563 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 77], 00:20:13.563 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 100], 95.00th=[ 105], 00:20:13.563 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 121], 99.95th=[ 124], 00:20:13.563 | 99.99th=[ 128] 00:20:13.563 bw ( KiB/s): min= 688, max= 1288, per=4.45%, avg=910.00, stdev=148.40, samples=20 00:20:13.563 iops : min= 172, max= 322, avg=227.50, stdev=37.10, samples=20 00:20:13.563 lat (msec) : 4=0.61%, 10=1.49%, 20=0.70%, 50=18.11%, 100=70.95% 00:20:13.563 lat (msec) : 250=8.14% 00:20:13.563 cpu : usr=43.00%, sys=2.85%, ctx=1256, majf=0, minf=9 00:20:13.563 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:13.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.563 filename0: (groupid=0, jobs=1): err= 0: pid=82963: Thu Jul 25 10:58:41 2024 00:20:13.563 read: IOPS=200, BW=804KiB/s (823kB/s)(8048KiB/10010msec) 00:20:13.563 slat (usec): min=3, max=4037, avg=18.33, stdev=126.81 00:20:13.563 clat (msec): min=13, max=179, avg=79.47, stdev=21.61 00:20:13.563 lat (msec): min=13, max=179, avg=79.48, stdev=21.62 00:20:13.563 clat percentiles (msec): 00:20:13.563 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 64], 00:20:13.563 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:20:13.563 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 111], 00:20:13.563 | 99.00th=[ 140], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 180], 00:20:13.563 | 99.99th=[ 180] 00:20:13.563 bw ( KiB/s): min= 496, max= 1000, per=3.85%, avg=786.42, stdev=146.03, samples=19 00:20:13.563 iops : min= 124, max= 250, avg=196.58, stdev=36.54, samples=19 00:20:13.563 lat (msec) : 20=0.35%, 50=10.04%, 100=73.31%, 250=16.30% 00:20:13.563 cpu : usr=42.25%, sys=2.27%, ctx=1317, majf=0, minf=9 00:20:13.563 IO depths : 1=0.1%, 2=4.1%, 4=16.3%, 8=66.0%, 16=13.6%, 32=0.0%, >=64=0.0% 00:20:13.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:13.563 complete : 0=0.0%, 4=91.6%, 8=4.8%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.563 filename0: (groupid=0, jobs=1): err= 0: pid=82964: Thu Jul 25 10:58:41 2024 00:20:13.563 read: IOPS=219, BW=876KiB/s (898kB/s)(8800KiB/10040msec) 00:20:13.563 slat (usec): min=6, max=8025, avg=24.08, stdev=295.71 00:20:13.563 clat (msec): min=23, max=143, avg=72.88, stdev=18.99 00:20:13.563 lat (msec): min=23, max=143, avg=72.90, stdev=18.99 00:20:13.563 clat percentiles (msec): 00:20:13.563 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:20:13.563 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:13.563 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 99], 95.00th=[ 106], 00:20:13.563 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 134], 99.95th=[ 138], 00:20:13.563 | 99.99th=[ 144] 00:20:13.563 bw ( KiB/s): min= 712, max= 1048, per=4.27%, avg=873.60, stdev=83.20, samples=20 00:20:13.563 iops : min= 178, max= 262, avg=218.40, stdev=20.80, samples=20 00:20:13.563 lat (msec) : 50=16.91%, 100=75.18%, 250=7.91% 00:20:13.563 cpu : usr=35.35%, sys=2.14%, ctx=1080, majf=0, minf=9 00:20:13.563 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.7%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:13.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 issued rwts: total=2200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.563 filename1: (groupid=0, jobs=1): err= 0: pid=82965: Thu Jul 25 10:58:41 2024 00:20:13.563 read: IOPS=213, BW=855KiB/s (876kB/s)(8556KiB/10007msec) 00:20:13.563 slat (usec): min=4, max=8024, avg=22.36, stdev=212.30 00:20:13.563 clat (msec): min=7, max=176, avg=74.70, stdev=22.56 00:20:13.563 lat (msec): min=7, max=176, avg=74.73, stdev=22.56 00:20:13.563 clat percentiles (msec): 00:20:13.563 | 1.00th=[ 28], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:20:13.563 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 81], 00:20:13.563 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:20:13.563 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 178], 00:20:13.563 | 99.99th=[ 178] 00:20:13.563 bw ( KiB/s): min= 507, max= 1024, per=4.10%, avg=837.21, stdev=176.42, samples=19 00:20:13.563 iops : min= 126, max= 256, avg=209.26, stdev=44.18, samples=19 00:20:13.563 lat (msec) : 10=0.28%, 20=0.28%, 50=17.44%, 100=70.87%, 250=11.13% 00:20:13.563 cpu : usr=43.96%, sys=2.21%, ctx=1002, majf=0, minf=9 00:20:13.563 IO depths : 1=0.1%, 2=2.4%, 4=9.7%, 8=73.4%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:13.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 complete : 0=0.0%, 4=89.5%, 8=8.4%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.563 issued rwts: total=2139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.563 filename1: (groupid=0, jobs=1): err= 0: pid=82966: Thu Jul 25 10:58:41 2024 00:20:13.563 read: IOPS=210, BW=842KiB/s (862kB/s)(8436KiB/10024msec) 00:20:13.563 slat (usec): min=4, max=8039, avg=27.47, stdev=303.07 00:20:13.563 clat (msec): min=36, max=152, avg=75.81, stdev=20.22 00:20:13.563 lat (msec): min=36, max=152, avg=75.83, stdev=20.23 00:20:13.563 clat percentiles (msec): 00:20:13.563 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 
48], 20.00th=[ 58], 00:20:13.563 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 81], 00:20:13.563 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:20:13.563 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 130], 99.95th=[ 153], 00:20:13.563 | 99.99th=[ 153] 00:20:13.563 bw ( KiB/s): min= 624, max= 1015, per=4.10%, avg=838.80, stdev=148.55, samples=20 00:20:13.563 iops : min= 156, max= 253, avg=209.65, stdev=37.10, samples=20 00:20:13.564 lat (msec) : 50=14.41%, 100=72.17%, 250=13.42% 00:20:13.564 cpu : usr=37.13%, sys=2.23%, ctx=1064, majf=0, minf=9 00:20:13.564 IO depths : 1=0.1%, 2=2.4%, 4=9.6%, 8=73.1%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:13.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 complete : 0=0.0%, 4=89.8%, 8=8.1%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 issued rwts: total=2109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.564 filename1: (groupid=0, jobs=1): err= 0: pid=82967: Thu Jul 25 10:58:41 2024 00:20:13.564 read: IOPS=219, BW=877KiB/s (898kB/s)(8800KiB/10036msec) 00:20:13.564 slat (usec): min=4, max=8026, avg=31.32, stdev=317.53 00:20:13.564 clat (msec): min=22, max=153, avg=72.78, stdev=19.74 00:20:13.564 lat (msec): min=22, max=153, avg=72.82, stdev=19.74 00:20:13.564 clat percentiles (msec): 00:20:13.564 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 53], 00:20:13.564 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:20:13.564 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 102], 95.00th=[ 107], 00:20:13.564 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 127], 99.95th=[ 155], 00:20:13.564 | 99.99th=[ 155] 00:20:13.564 bw ( KiB/s): min= 640, max= 1024, per=4.27%, avg=873.60, stdev=119.92, samples=20 00:20:13.564 iops : min= 160, max= 256, avg=218.40, stdev=29.98, samples=20 00:20:13.564 lat (msec) : 50=16.86%, 100=72.68%, 250=10.45% 00:20:13.564 cpu : usr=38.72%, sys=1.90%, ctx=1427, majf=0, minf=10 00:20:13.564 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:13.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 issued rwts: total=2200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.564 filename1: (groupid=0, jobs=1): err= 0: pid=82968: Thu Jul 25 10:58:41 2024 00:20:13.564 read: IOPS=195, BW=783KiB/s (802kB/s)(7860KiB/10036msec) 00:20:13.564 slat (usec): min=5, max=8030, avg=38.06, stdev=443.63 00:20:13.564 clat (msec): min=36, max=142, avg=81.40, stdev=20.02 00:20:13.564 lat (msec): min=36, max=142, avg=81.44, stdev=20.03 00:20:13.564 clat percentiles (msec): 00:20:13.564 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 64], 00:20:13.564 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:20:13.564 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 111], 00:20:13.564 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:20:13.564 | 99.99th=[ 144] 00:20:13.564 bw ( KiB/s): min= 512, max= 968, per=3.81%, avg=779.60, stdev=138.36, samples=20 00:20:13.564 iops : min= 128, max= 242, avg=194.90, stdev=34.59, samples=20 00:20:13.564 lat (msec) : 50=10.59%, 100=71.65%, 250=17.76% 00:20:13.564 cpu : usr=34.02%, sys=1.83%, ctx=964, majf=0, minf=9 00:20:13.564 IO depths : 1=0.1%, 2=3.1%, 4=12.4%, 8=69.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:13.564 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 complete : 0=0.0%, 4=91.0%, 8=6.3%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 issued rwts: total=1965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.564 filename1: (groupid=0, jobs=1): err= 0: pid=82969: Thu Jul 25 10:58:41 2024 00:20:13.564 read: IOPS=223, BW=893KiB/s (915kB/s)(8952KiB/10020msec) 00:20:13.564 slat (usec): min=4, max=8057, avg=25.15, stdev=293.60 00:20:13.564 clat (msec): min=23, max=203, avg=71.48, stdev=21.05 00:20:13.564 lat (msec): min=23, max=203, avg=71.50, stdev=21.06 00:20:13.564 clat percentiles (msec): 00:20:13.564 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 48], 00:20:13.564 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:13.564 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 97], 95.00th=[ 107], 00:20:13.564 | 99.00th=[ 116], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 203], 00:20:13.564 | 99.99th=[ 203] 00:20:13.564 bw ( KiB/s): min= 512, max= 1024, per=4.36%, avg=891.05, stdev=130.77, samples=20 00:20:13.564 iops : min= 128, max= 256, avg=222.70, stdev=32.64, samples=20 00:20:13.564 lat (msec) : 50=23.68%, 100=68.95%, 250=7.37% 00:20:13.564 cpu : usr=31.37%, sys=1.88%, ctx=858, majf=0, minf=9 00:20:13.564 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:13.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.564 filename1: (groupid=0, jobs=1): err= 0: pid=82970: Thu Jul 25 10:58:41 2024 00:20:13.564 read: IOPS=217, BW=871KiB/s (892kB/s)(8712KiB/10004msec) 00:20:13.564 slat (usec): min=7, max=8035, avg=21.88, stdev=243.02 00:20:13.564 clat (msec): min=13, max=215, avg=73.38, stdev=22.49 00:20:13.564 lat (msec): min=13, max=215, avg=73.40, stdev=22.48 00:20:13.564 clat percentiles (msec): 00:20:13.564 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:20:13.564 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:13.564 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 108], 00:20:13.564 | 99.00th=[ 121], 99.50th=[ 176], 99.90th=[ 176], 99.95th=[ 215], 00:20:13.564 | 99.99th=[ 215] 00:20:13.564 bw ( KiB/s): min= 509, max= 1024, per=4.20%, avg=858.79, stdev=170.82, samples=19 00:20:13.564 iops : min= 127, max= 256, avg=214.68, stdev=42.73, samples=19 00:20:13.564 lat (msec) : 20=0.32%, 50=20.25%, 100=67.81%, 250=11.62% 00:20:13.564 cpu : usr=35.43%, sys=2.13%, ctx=1035, majf=0, minf=9 00:20:13.564 IO depths : 1=0.1%, 2=2.0%, 4=7.7%, 8=75.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:13.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 complete : 0=0.0%, 4=89.0%, 8=9.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 issued rwts: total=2178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.564 filename1: (groupid=0, jobs=1): err= 0: pid=82971: Thu Jul 25 10:58:41 2024 00:20:13.564 read: IOPS=229, BW=918KiB/s (940kB/s)(9176KiB/10001msec) 00:20:13.564 slat (usec): min=7, max=8044, avg=32.62, stdev=378.30 00:20:13.564 clat (usec): min=822, max=217594, avg=69627.60, stdev=30092.07 00:20:13.564 lat (usec): min=830, max=217616, avg=69660.22, stdev=30100.51 00:20:13.564 
clat percentiles (usec): 00:20:13.564 | 1.00th=[ 1090], 5.00th=[ 1876], 10.00th=[ 38536], 20.00th=[ 47973], 00:20:13.564 | 30.00th=[ 58983], 40.00th=[ 63701], 50.00th=[ 71828], 60.00th=[ 74974], 00:20:13.564 | 70.00th=[ 83362], 80.00th=[ 95945], 90.00th=[103285], 95.00th=[108528], 00:20:13.564 | 99.00th=[131597], 99.50th=[193987], 99.90th=[193987], 99.95th=[210764], 00:20:13.564 | 99.99th=[217056] 00:20:13.564 bw ( KiB/s): min= 496, max= 1024, per=4.08%, avg=834.11, stdev=178.56, samples=19 00:20:13.564 iops : min= 124, max= 256, avg=208.53, stdev=44.64, samples=19 00:20:13.564 lat (usec) : 1000=0.78% 00:20:13.564 lat (msec) : 2=5.10%, 4=1.57%, 10=0.26%, 20=0.13%, 50=16.48% 00:20:13.564 lat (msec) : 100=64.47%, 250=11.20% 00:20:13.564 cpu : usr=35.65%, sys=2.04%, ctx=1020, majf=0, minf=9 00:20:13.564 IO depths : 1=0.1%, 2=2.2%, 4=8.7%, 8=74.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:13.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 complete : 0=0.0%, 4=89.5%, 8=8.6%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.564 filename1: (groupid=0, jobs=1): err= 0: pid=82973: Thu Jul 25 10:58:41 2024 00:20:13.564 read: IOPS=225, BW=903KiB/s (925kB/s)(9048KiB/10015msec) 00:20:13.564 slat (usec): min=4, max=8032, avg=20.38, stdev=191.69 00:20:13.564 clat (msec): min=16, max=193, avg=70.74, stdev=21.11 00:20:13.564 lat (msec): min=16, max=193, avg=70.76, stdev=21.10 00:20:13.564 clat percentiles (msec): 00:20:13.564 | 1.00th=[ 29], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 50], 00:20:13.564 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 74], 00:20:13.564 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 99], 95.00th=[ 106], 00:20:13.564 | 99.00th=[ 114], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 194], 00:20:13.564 | 99.99th=[ 194] 00:20:13.564 bw ( KiB/s): min= 528, max= 1128, per=4.39%, avg=898.65, stdev=134.30, samples=20 00:20:13.564 iops : min= 132, max= 282, avg=224.60, stdev=33.55, samples=20 00:20:13.564 lat (msec) : 20=0.13%, 50=20.56%, 100=70.73%, 250=8.58% 00:20:13.564 cpu : usr=43.28%, sys=2.66%, ctx=1631, majf=0, minf=9 00:20:13.564 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:13.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.564 issued rwts: total=2262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.564 filename2: (groupid=0, jobs=1): err= 0: pid=82974: Thu Jul 25 10:58:41 2024 00:20:13.564 read: IOPS=216, BW=865KiB/s (885kB/s)(8664KiB/10020msec) 00:20:13.564 slat (usec): min=7, max=8044, avg=40.45, stdev=438.49 00:20:13.564 clat (msec): min=26, max=147, avg=73.78, stdev=19.68 00:20:13.564 lat (msec): min=26, max=147, avg=73.82, stdev=19.67 00:20:13.564 clat percentiles (msec): 00:20:13.564 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:20:13.564 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:20:13.564 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 107], 00:20:13.564 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 140], 99.95th=[ 148], 00:20:13.564 | 99.99th=[ 148] 00:20:13.565 bw ( KiB/s): min= 624, max= 1080, per=4.21%, avg=860.00, stdev=140.44, samples=20 00:20:13.565 iops : min= 156, max= 270, avg=215.00, stdev=35.11, samples=20 00:20:13.565 
lat (msec) : 50=17.04%, 100=73.13%, 250=9.83% 00:20:13.565 cpu : usr=34.31%, sys=1.78%, ctx=984, majf=0, minf=9 00:20:13.565 IO depths : 1=0.1%, 2=1.8%, 4=7.0%, 8=76.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:13.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 complete : 0=0.0%, 4=89.0%, 8=9.5%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.565 filename2: (groupid=0, jobs=1): err= 0: pid=82975: Thu Jul 25 10:58:41 2024 00:20:13.565 read: IOPS=217, BW=869KiB/s (890kB/s)(8720KiB/10037msec) 00:20:13.565 slat (usec): min=7, max=8024, avg=19.25, stdev=191.97 00:20:13.565 clat (msec): min=23, max=169, avg=73.54, stdev=20.45 00:20:13.565 lat (msec): min=23, max=169, avg=73.56, stdev=20.45 00:20:13.565 clat percentiles (msec): 00:20:13.565 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 53], 00:20:13.565 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:20:13.565 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 102], 95.00th=[ 108], 00:20:13.565 | 99.00th=[ 131], 99.50th=[ 134], 99.90th=[ 134], 99.95th=[ 169], 00:20:13.565 | 99.99th=[ 169] 00:20:13.565 bw ( KiB/s): min= 600, max= 1024, per=4.23%, avg=865.60, stdev=120.73, samples=20 00:20:13.565 iops : min= 150, max= 256, avg=216.40, stdev=30.18, samples=20 00:20:13.565 lat (msec) : 50=17.57%, 100=71.61%, 250=10.83% 00:20:13.565 cpu : usr=38.34%, sys=2.31%, ctx=1459, majf=0, minf=9 00:20:13.565 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:13.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 issued rwts: total=2180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.565 filename2: (groupid=0, jobs=1): err= 0: pid=82978: Thu Jul 25 10:58:41 2024 00:20:13.565 read: IOPS=218, BW=873KiB/s (894kB/s)(8732KiB/10004msec) 00:20:13.565 slat (usec): min=4, max=8034, avg=25.92, stdev=271.29 00:20:13.565 clat (msec): min=3, max=213, avg=73.22, stdev=22.65 00:20:13.565 lat (msec): min=3, max=213, avg=73.24, stdev=22.65 00:20:13.565 clat percentiles (msec): 00:20:13.565 | 1.00th=[ 24], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 51], 00:20:13.565 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 79], 00:20:13.565 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 103], 95.00th=[ 107], 00:20:13.565 | 99.00th=[ 120], 99.50th=[ 171], 99.90th=[ 171], 99.95th=[ 213], 00:20:13.565 | 99.99th=[ 213] 00:20:13.565 bw ( KiB/s): min= 510, max= 1024, per=4.18%, avg=855.47, stdev=164.71, samples=19 00:20:13.565 iops : min= 127, max= 256, avg=213.84, stdev=41.24, samples=19 00:20:13.565 lat (msec) : 4=0.18%, 10=0.27%, 20=0.14%, 50=19.10%, 100=69.81% 00:20:13.565 lat (msec) : 250=10.49% 00:20:13.565 cpu : usr=35.45%, sys=1.99%, ctx=1060, majf=0, minf=9 00:20:13.565 IO depths : 1=0.1%, 2=1.6%, 4=6.5%, 8=76.9%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:13.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 complete : 0=0.0%, 4=88.7%, 8=9.9%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 issued rwts: total=2183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.565 filename2: (groupid=0, jobs=1): err= 0: pid=82979: Thu Jul 25 10:58:41 2024 00:20:13.565 read: IOPS=206, 
BW=826KiB/s (846kB/s)(8312KiB/10059msec) 00:20:13.565 slat (usec): min=3, max=4023, avg=15.92, stdev=124.42 00:20:13.565 clat (usec): min=1436, max=154995, avg=77272.60, stdev=30712.46 00:20:13.565 lat (usec): min=1443, max=155010, avg=77288.52, stdev=30716.09 00:20:13.565 clat percentiles (usec): 00:20:13.565 | 1.00th=[ 1500], 5.00th=[ 2180], 10.00th=[ 6390], 20.00th=[ 64750], 00:20:13.565 | 30.00th=[ 70779], 40.00th=[ 72877], 50.00th=[ 80217], 60.00th=[ 87557], 00:20:13.565 | 70.00th=[ 93848], 80.00th=[ 99091], 90.00th=[105382], 95.00th=[112722], 00:20:13.565 | 99.00th=[145753], 99.50th=[154141], 99.90th=[154141], 99.95th=[154141], 00:20:13.565 | 99.99th=[154141] 00:20:13.565 bw ( KiB/s): min= 624, max= 2304, per=4.03%, avg=824.80, stdev=364.36, samples=20 00:20:13.565 iops : min= 156, max= 576, avg=206.20, stdev=91.09, samples=20 00:20:13.565 lat (msec) : 2=3.85%, 4=3.85%, 10=2.31%, 50=0.67%, 100=71.37% 00:20:13.565 lat (msec) : 250=17.95% 00:20:13.565 cpu : usr=45.31%, sys=2.92%, ctx=1515, majf=0, minf=0 00:20:13.565 IO depths : 1=0.4%, 2=6.6%, 4=24.8%, 8=56.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:20:13.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 issued rwts: total=2078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.565 filename2: (groupid=0, jobs=1): err= 0: pid=82980: Thu Jul 25 10:58:41 2024 00:20:13.565 read: IOPS=215, BW=860KiB/s (881kB/s)(8636KiB/10039msec) 00:20:13.565 slat (usec): min=7, max=8032, avg=27.50, stdev=333.82 00:20:13.565 clat (msec): min=33, max=155, avg=74.23, stdev=20.28 00:20:13.565 lat (msec): min=33, max=156, avg=74.26, stdev=20.29 00:20:13.565 clat percentiles (msec): 00:20:13.565 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:20:13.565 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:13.565 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 103], 95.00th=[ 108], 00:20:13.565 | 99.00th=[ 121], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 157], 00:20:13.565 | 99.99th=[ 157] 00:20:13.565 bw ( KiB/s): min= 640, max= 1024, per=4.19%, avg=857.20, stdev=112.07, samples=20 00:20:13.565 iops : min= 160, max= 256, avg=214.30, stdev=28.02, samples=20 00:20:13.565 lat (msec) : 50=16.21%, 100=70.59%, 250=13.20% 00:20:13.565 cpu : usr=31.40%, sys=1.83%, ctx=1043, majf=0, minf=9 00:20:13.565 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=79.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:13.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.565 filename2: (groupid=0, jobs=1): err= 0: pid=82981: Thu Jul 25 10:58:41 2024 00:20:13.565 read: IOPS=217, BW=870KiB/s (891kB/s)(8708KiB/10007msec) 00:20:13.565 slat (usec): min=7, max=8029, avg=23.78, stdev=257.62 00:20:13.565 clat (msec): min=7, max=216, avg=73.42, stdev=23.46 00:20:13.565 lat (msec): min=7, max=216, avg=73.44, stdev=23.46 00:20:13.565 clat percentiles (msec): 00:20:13.565 | 1.00th=[ 27], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 51], 00:20:13.565 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:20:13.565 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 108], 00:20:13.565 | 99.00th=[ 132], 99.50th=[ 182], 99.90th=[ 182], 99.95th=[ 
218], 00:20:13.565 | 99.99th=[ 218] 00:20:13.565 bw ( KiB/s): min= 492, max= 1024, per=4.17%, avg=852.84, stdev=159.55, samples=19 00:20:13.565 iops : min= 123, max= 256, avg=213.21, stdev=39.89, samples=19 00:20:13.565 lat (msec) : 10=0.32%, 20=0.28%, 50=19.25%, 100=68.95%, 250=11.21% 00:20:13.565 cpu : usr=38.33%, sys=2.17%, ctx=1191, majf=0, minf=9 00:20:13.565 IO depths : 1=0.1%, 2=1.7%, 4=6.6%, 8=76.8%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:13.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 complete : 0=0.0%, 4=88.7%, 8=9.9%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 issued rwts: total=2177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.565 filename2: (groupid=0, jobs=1): err= 0: pid=82982: Thu Jul 25 10:58:41 2024 00:20:13.565 read: IOPS=214, BW=858KiB/s (879kB/s)(8600KiB/10024msec) 00:20:13.565 slat (usec): min=4, max=10034, avg=23.91, stdev=289.80 00:20:13.565 clat (msec): min=24, max=154, avg=74.42, stdev=21.40 00:20:13.565 lat (msec): min=24, max=154, avg=74.44, stdev=21.39 00:20:13.565 clat percentiles (msec): 00:20:13.565 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:20:13.565 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:20:13.565 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 112], 00:20:13.565 | 99.00th=[ 127], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 155], 00:20:13.565 | 99.99th=[ 155] 00:20:13.565 bw ( KiB/s): min= 624, max= 1024, per=4.18%, avg=855.25, stdev=131.49, samples=20 00:20:13.565 iops : min= 156, max= 256, avg=213.80, stdev=32.88, samples=20 00:20:13.565 lat (msec) : 50=18.37%, 100=68.09%, 250=13.53% 00:20:13.565 cpu : usr=31.34%, sys=1.85%, ctx=1049, majf=0, minf=9 00:20:13.565 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=78.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:13.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.565 issued rwts: total=2150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.565 filename2: (groupid=0, jobs=1): err= 0: pid=82983: Thu Jul 25 10:58:41 2024 00:20:13.565 read: IOPS=205, BW=824KiB/s (843kB/s)(8268KiB/10040msec) 00:20:13.565 slat (usec): min=7, max=8024, avg=25.20, stdev=305.09 00:20:13.565 clat (msec): min=26, max=153, avg=77.57, stdev=21.56 00:20:13.565 lat (msec): min=26, max=153, avg=77.59, stdev=21.57 00:20:13.565 clat percentiles (msec): 00:20:13.565 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:13.565 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:20:13.565 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:20:13.565 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 155], 00:20:13.565 | 99.99th=[ 155] 00:20:13.566 bw ( KiB/s): min= 512, max= 1000, per=4.01%, avg=820.40, stdev=146.24, samples=20 00:20:13.566 iops : min= 128, max= 250, avg=205.10, stdev=36.56, samples=20 00:20:13.566 lat (msec) : 50=12.58%, 100=73.88%, 250=13.55% 00:20:13.566 cpu : usr=31.30%, sys=1.91%, ctx=857, majf=0, minf=9 00:20:13.566 IO depths : 1=0.1%, 2=2.4%, 4=9.2%, 8=73.2%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:13.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.566 complete : 0=0.0%, 4=90.0%, 8=7.9%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.566 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:13.566 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:13.566 00:20:13.566 Run status group 0 (all jobs): 00:20:13.566 READ: bw=20.0MiB/s (20.9MB/s), 783KiB/s-918KiB/s (802kB/s-940kB/s), io=201MiB (210MB), run=10001-10059msec 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:13.566 10:58:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 bdev_null0 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 [2024-07-25 10:58:41.528731] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:13.566 
10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 bdev_null1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:13.566 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.566 { 00:20:13.566 "params": { 00:20:13.566 "name": "Nvme$subsystem", 00:20:13.566 "trtype": "$TEST_TRANSPORT", 00:20:13.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.566 "adrfam": "ipv4", 00:20:13.566 "trsvcid": "$NVMF_PORT", 00:20:13.566 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.566 "hdgst": ${hdgst:-false}, 00:20:13.566 "ddgst": ${ddgst:-false} 00:20:13.566 }, 00:20:13.566 "method": "bdev_nvme_attach_controller" 00:20:13.566 } 00:20:13.566 EOF 00:20:13.566 )") 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.567 { 00:20:13.567 "params": { 00:20:13.567 "name": "Nvme$subsystem", 00:20:13.567 "trtype": "$TEST_TRANSPORT", 00:20:13.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.567 "adrfam": "ipv4", 00:20:13.567 "trsvcid": "$NVMF_PORT", 00:20:13.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.567 "hdgst": ${hdgst:-false}, 00:20:13.567 "ddgst": ${ddgst:-false} 00:20:13.567 }, 00:20:13.567 "method": "bdev_nvme_attach_controller" 00:20:13.567 } 00:20:13.567 EOF 00:20:13.567 )") 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:13.567 "params": { 00:20:13.567 "name": "Nvme0", 00:20:13.567 "trtype": "tcp", 00:20:13.567 "traddr": "10.0.0.2", 00:20:13.567 "adrfam": "ipv4", 00:20:13.567 "trsvcid": "4420", 00:20:13.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:13.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:13.567 "hdgst": false, 00:20:13.567 "ddgst": false 00:20:13.567 }, 00:20:13.567 "method": "bdev_nvme_attach_controller" 00:20:13.567 },{ 00:20:13.567 "params": { 00:20:13.567 "name": "Nvme1", 00:20:13.567 "trtype": "tcp", 00:20:13.567 "traddr": "10.0.0.2", 00:20:13.567 "adrfam": "ipv4", 00:20:13.567 "trsvcid": "4420", 00:20:13.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.567 "hdgst": false, 00:20:13.567 "ddgst": false 00:20:13.567 }, 00:20:13.567 "method": "bdev_nvme_attach_controller" 00:20:13.567 }' 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:13.567 10:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:13.567 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:13.567 ... 00:20:13.567 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:13.567 ... 
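The job file that gen_fio_conf passes to fio on /dev/fd/61 is not echoed in the log; from the parameters set at target/dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the per-file headers fio prints here, it is roughly the following sketch (the Nvme0n1/Nvme1n1 filenames are an assumption for the bdev names produced by the two attach calls):

    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1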
00:20:13.567 fio-3.35 00:20:13.567 Starting 4 threads 00:20:17.774 00:20:17.774 filename0: (groupid=0, jobs=1): err= 0: pid=83117: Thu Jul 25 10:58:47 2024 00:20:17.774 read: IOPS=2020, BW=15.8MiB/s (16.6MB/s)(78.9MiB/5001msec) 00:20:17.774 slat (nsec): min=3463, max=54379, avg=15120.62, stdev=4248.36 00:20:17.774 clat (usec): min=1598, max=6244, avg=3900.01, stdev=239.95 00:20:17.774 lat (usec): min=1611, max=6259, avg=3915.13, stdev=239.90 00:20:17.774 clat percentiles (usec): 00:20:17.774 | 1.00th=[ 3458], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3785], 00:20:17.774 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3851], 60.00th=[ 3884], 00:20:17.774 | 70.00th=[ 3916], 80.00th=[ 4015], 90.00th=[ 4146], 95.00th=[ 4228], 00:20:17.774 | 99.00th=[ 4686], 99.50th=[ 5145], 99.90th=[ 5276], 99.95th=[ 5342], 00:20:17.774 | 99.99th=[ 5342] 00:20:17.774 bw ( KiB/s): min=15232, max=16768, per=24.47%, avg=16129.78, stdev=528.75, samples=9 00:20:17.774 iops : min= 1904, max= 2096, avg=2016.22, stdev=66.09, samples=9 00:20:17.774 lat (msec) : 2=0.03%, 4=79.70%, 10=20.27% 00:20:17.774 cpu : usr=91.88%, sys=7.42%, ctx=4, majf=0, minf=10 00:20:17.774 IO depths : 1=0.1%, 2=24.8%, 4=50.1%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.774 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.774 issued rwts: total=10105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.774 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:17.774 filename0: (groupid=0, jobs=1): err= 0: pid=83118: Thu Jul 25 10:58:47 2024 00:20:17.774 read: IOPS=2154, BW=16.8MiB/s (17.7MB/s)(84.2MiB/5001msec) 00:20:17.774 slat (nsec): min=6778, max=64174, avg=13034.86, stdev=4512.14 00:20:17.774 clat (usec): min=636, max=6923, avg=3666.74, stdev=611.62 00:20:17.774 lat (usec): min=645, max=6938, avg=3679.78, stdev=612.40 00:20:17.774 clat percentiles (usec): 00:20:17.775 | 1.00th=[ 1385], 5.00th=[ 1844], 10.00th=[ 2966], 20.00th=[ 3687], 00:20:17.775 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3851], 00:20:17.775 | 70.00th=[ 3884], 80.00th=[ 3916], 90.00th=[ 4015], 95.00th=[ 4047], 00:20:17.775 | 99.00th=[ 4359], 99.50th=[ 4424], 99.90th=[ 4686], 99.95th=[ 4817], 00:20:17.775 | 99.99th=[ 6915] 00:20:17.775 bw ( KiB/s): min=16128, max=19984, per=26.29%, avg=17331.33, stdev=1347.89, samples=9 00:20:17.775 iops : min= 2016, max= 2498, avg=2166.33, stdev=168.53, samples=9 00:20:17.775 lat (usec) : 750=0.07%, 1000=0.03% 00:20:17.775 lat (msec) : 2=5.14%, 4=83.38%, 10=11.38% 00:20:17.775 cpu : usr=92.10%, sys=7.10%, ctx=11, majf=0, minf=0 00:20:17.775 IO depths : 1=0.1%, 2=19.5%, 4=53.2%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.775 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.775 issued rwts: total=10775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.775 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:17.775 filename1: (groupid=0, jobs=1): err= 0: pid=83119: Thu Jul 25 10:58:47 2024 00:20:17.775 read: IOPS=2048, BW=16.0MiB/s (16.8MB/s)(80.1MiB/5003msec) 00:20:17.775 slat (nsec): min=7376, max=65071, avg=15090.47, stdev=4280.72 00:20:17.775 clat (usec): min=1170, max=6980, avg=3846.30, stdev=310.19 00:20:17.775 lat (usec): min=1179, max=6995, avg=3861.39, stdev=310.26 00:20:17.775 clat percentiles (usec): 00:20:17.775 | 1.00th=[ 2180], 5.00th=[ 3556], 10.00th=[ 3654], 
20.00th=[ 3785], 00:20:17.775 | 30.00th=[ 3818], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:20:17.775 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4146], 95.00th=[ 4178], 00:20:17.775 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 5145], 99.95th=[ 5211], 00:20:17.775 | 99.99th=[ 6259] 00:20:17.775 bw ( KiB/s): min=15488, max=17408, per=24.86%, avg=16389.44, stdev=567.22, samples=9 00:20:17.775 iops : min= 1936, max= 2176, avg=2048.67, stdev=70.89, samples=9 00:20:17.775 lat (msec) : 2=0.38%, 4=81.03%, 10=18.59% 00:20:17.775 cpu : usr=91.90%, sys=7.32%, ctx=7, majf=0, minf=0 00:20:17.775 IO depths : 1=0.1%, 2=23.7%, 4=50.8%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.775 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.775 issued rwts: total=10249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.775 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:17.775 filename1: (groupid=0, jobs=1): err= 0: pid=83120: Thu Jul 25 10:58:47 2024 00:20:17.775 read: IOPS=2019, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5001msec) 00:20:17.775 slat (nsec): min=6710, max=50870, avg=12139.50, stdev=4138.68 00:20:17.775 clat (usec): min=456, max=6696, avg=3912.70, stdev=279.25 00:20:17.775 lat (usec): min=464, max=6720, avg=3924.84, stdev=280.10 00:20:17.775 clat percentiles (usec): 00:20:17.775 | 1.00th=[ 3490], 5.00th=[ 3621], 10.00th=[ 3752], 20.00th=[ 3785], 00:20:17.775 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3884], 00:20:17.775 | 70.00th=[ 3949], 80.00th=[ 4015], 90.00th=[ 4146], 95.00th=[ 4228], 00:20:17.775 | 99.00th=[ 5014], 99.50th=[ 5276], 99.90th=[ 6063], 99.95th=[ 6456], 00:20:17.775 | 99.99th=[ 6521] 00:20:17.775 bw ( KiB/s): min=15232, max=16880, per=24.44%, avg=16112.00, stdev=521.72, samples=9 00:20:17.775 iops : min= 1904, max= 2110, avg=2014.00, stdev=65.22, samples=9 00:20:17.775 lat (usec) : 500=0.03%, 1000=0.02% 00:20:17.775 lat (msec) : 2=0.15%, 4=78.40%, 10=21.40% 00:20:17.775 cpu : usr=91.40%, sys=7.84%, ctx=57, majf=0, minf=9 00:20:17.775 IO depths : 1=0.1%, 2=24.9%, 4=50.1%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.775 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.775 issued rwts: total=10098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.775 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:17.775 00:20:17.775 Run status group 0 (all jobs): 00:20:17.775 READ: bw=64.4MiB/s (67.5MB/s), 15.8MiB/s-16.8MiB/s (16.5MB/s-17.7MB/s), io=322MiB (338MB), run=5001-5003msec 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.035 ************************************ 00:20:18.035 END TEST fio_dif_rand_params 00:20:18.035 ************************************ 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.035 00:20:18.035 real 0m23.520s 00:20:18.035 user 2m3.603s 00:20:18.035 sys 0m8.756s 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:18.035 10:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.035 10:58:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:18.035 10:58:47 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:18.035 10:58:47 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:18.035 10:58:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:18.035 ************************************ 00:20:18.035 START TEST fio_dif_digest 00:20:18.035 ************************************ 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:18.035 bdev_null0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:18.035 [2024-07-25 10:58:47.699721] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.035 { 00:20:18.035 "params": { 00:20:18.035 "name": "Nvme$subsystem", 00:20:18.035 "trtype": "$TEST_TRANSPORT", 00:20:18.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.035 "adrfam": 
"ipv4", 00:20:18.035 "trsvcid": "$NVMF_PORT", 00:20:18.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.035 "hdgst": ${hdgst:-false}, 00:20:18.035 "ddgst": ${ddgst:-false} 00:20:18.035 }, 00:20:18.035 "method": "bdev_nvme_attach_controller" 00:20:18.035 } 00:20:18.035 EOF 00:20:18.035 )") 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:18.035 "params": { 00:20:18.035 "name": "Nvme0", 00:20:18.035 "trtype": "tcp", 00:20:18.035 "traddr": "10.0.0.2", 00:20:18.035 "adrfam": "ipv4", 00:20:18.035 "trsvcid": "4420", 00:20:18.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:18.035 "hdgst": true, 00:20:18.035 "ddgst": true 00:20:18.035 }, 00:20:18.035 "method": "bdev_nvme_attach_controller" 00:20:18.035 }' 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:18.035 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:18.036 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:18.036 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:18.036 10:58:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.294 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:18.294 ... 
00:20:18.294 fio-3.35 00:20:18.294 Starting 3 threads 00:20:30.501 00:20:30.501 filename0: (groupid=0, jobs=1): err= 0: pid=83225: Thu Jul 25 10:58:58 2024 00:20:30.501 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(285MiB/10006msec) 00:20:30.501 slat (nsec): min=6750, max=61513, avg=10239.28, stdev=3924.57 00:20:30.501 clat (usec): min=9901, max=16506, avg=13156.81, stdev=401.85 00:20:30.501 lat (usec): min=9908, max=16531, avg=13167.05, stdev=402.14 00:20:30.501 clat percentiles (usec): 00:20:30.501 | 1.00th=[11863], 5.00th=[12387], 10.00th=[12649], 20.00th=[13173], 00:20:30.501 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:20:30.501 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:20:30.501 | 99.00th=[13829], 99.50th=[13960], 99.90th=[16450], 99.95th=[16450], 00:20:30.501 | 99.99th=[16450] 00:20:30.501 bw ( KiB/s): min=27648, max=29952, per=33.36%, avg=29146.63, stdev=541.87, samples=19 00:20:30.501 iops : min= 216, max= 234, avg=227.68, stdev= 4.23, samples=19 00:20:30.501 lat (msec) : 10=0.13%, 20=99.87% 00:20:30.501 cpu : usr=91.54%, sys=7.94%, ctx=34, majf=0, minf=9 00:20:30.501 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.501 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.501 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:30.501 filename0: (groupid=0, jobs=1): err= 0: pid=83226: Thu Jul 25 10:58:58 2024 00:20:30.501 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(285MiB/10007msec) 00:20:30.501 slat (usec): min=6, max=135, avg=10.22, stdev= 4.44 00:20:30.501 clat (usec): min=11679, max=15798, avg=13158.62, stdev=380.85 00:20:30.501 lat (usec): min=11686, max=15821, avg=13168.83, stdev=381.07 00:20:30.501 clat percentiles (usec): 00:20:30.501 | 1.00th=[11863], 5.00th=[12387], 10.00th=[12649], 20.00th=[13173], 00:20:30.501 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:20:30.501 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:20:30.501 | 99.00th=[13829], 99.50th=[13960], 99.90th=[15795], 99.95th=[15795], 00:20:30.501 | 99.99th=[15795] 00:20:30.501 bw ( KiB/s): min=27703, max=30012, per=33.33%, avg=29118.85, stdev=607.93, samples=20 00:20:30.501 iops : min= 216, max= 234, avg=227.40, stdev= 4.73, samples=20 00:20:30.501 lat (msec) : 20=100.00% 00:20:30.501 cpu : usr=91.68%, sys=7.81%, ctx=12, majf=0, minf=0 00:20:30.501 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.501 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.501 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:30.501 filename0: (groupid=0, jobs=1): err= 0: pid=83227: Thu Jul 25 10:58:58 2024 00:20:30.501 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(285MiB/10008msec) 00:20:30.501 slat (nsec): min=6926, max=60515, avg=10404.13, stdev=3918.73 00:20:30.501 clat (usec): min=11712, max=15205, avg=13159.44, stdev=372.42 00:20:30.501 lat (usec): min=11720, max=15217, avg=13169.84, stdev=372.95 00:20:30.501 clat percentiles (usec): 00:20:30.501 | 1.00th=[11863], 5.00th=[12387], 10.00th=[12649], 20.00th=[13173], 00:20:30.501 | 30.00th=[13173], 40.00th=[13173], 
50.00th=[13173], 60.00th=[13173], 00:20:30.501 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:20:30.501 | 99.00th=[13829], 99.50th=[13960], 99.90th=[15139], 99.95th=[15139], 00:20:30.501 | 99.99th=[15270] 00:20:30.501 bw ( KiB/s): min=28416, max=30012, per=33.33%, avg=29116.10, stdev=437.79, samples=20 00:20:30.501 iops : min= 222, max= 234, avg=227.40, stdev= 3.32, samples=20 00:20:30.501 lat (msec) : 20=100.00% 00:20:30.501 cpu : usr=91.73%, sys=7.76%, ctx=20, majf=0, minf=0 00:20:30.501 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.501 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.501 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:30.501 00:20:30.501 Run status group 0 (all jobs): 00:20:30.501 READ: bw=85.3MiB/s (89.5MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=854MiB (895MB), run=10006-10008msec 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:30.502 ************************************ 00:20:30.502 END TEST fio_dif_digest 00:20:30.502 ************************************ 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.502 00:20:30.502 real 0m10.987s 00:20:30.502 user 0m28.151s 00:20:30.502 sys 0m2.612s 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.502 10:58:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:30.502 10:58:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:30.502 10:58:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:30.502 10:58:58 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:30.502 10:58:58 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:20:30.502 10:58:58 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:30.502 10:58:58 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:20:30.502 10:58:58 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:30.502 10:58:58 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:30.502 rmmod nvme_tcp 00:20:30.502 rmmod nvme_fabrics 00:20:30.502 rmmod nvme_keyring 00:20:30.502 10:58:58 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:30.502 10:58:58 nvmf_dif 
-- nvmf/common.sh@124 -- # set -e 00:20:30.502 10:58:58 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:20:30.502 10:58:58 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82475 ']' 00:20:30.502 10:58:58 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82475 00:20:30.502 10:58:58 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 82475 ']' 00:20:30.502 10:58:58 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 82475 00:20:30.502 10:58:58 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:20:30.502 10:58:58 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:30.502 10:58:58 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82475 00:20:30.502 killing process with pid 82475 00:20:30.502 10:58:58 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:30.502 10:58:58 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:30.502 10:58:58 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82475' 00:20:30.502 10:58:58 nvmf_dif -- common/autotest_common.sh@969 -- # kill 82475 00:20:30.502 10:58:58 nvmf_dif -- common/autotest_common.sh@974 -- # wait 82475 00:20:30.502 10:58:59 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:30.502 10:58:59 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:30.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:30.502 Waiting for block devices as requested 00:20:30.502 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:30.502 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:30.502 10:58:59 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:30.502 10:58:59 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:30.502 10:58:59 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:30.502 10:58:59 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:30.502 10:58:59 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.502 10:58:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:30.502 10:58:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.502 10:58:59 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:30.502 00:20:30.502 real 0m59.757s 00:20:30.502 user 3m47.199s 00:20:30.502 sys 0m20.394s 00:20:30.502 10:58:59 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.502 ************************************ 00:20:30.502 END TEST nvmf_dif 00:20:30.502 ************************************ 00:20:30.502 10:58:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:30.502 10:58:59 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:30.502 10:58:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:30.502 10:58:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.502 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:20:30.502 ************************************ 00:20:30.502 START TEST nvmf_abort_qd_sizes 00:20:30.502 ************************************ 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:30.502 * Looking for test storage... 
00:20:30.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:30.502 10:58:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:30.502 Cannot find device "nvmf_tgt_br" 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:30.502 Cannot find device "nvmf_tgt_br2" 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:30.502 Cannot find device "nvmf_tgt_br" 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:30.502 Cannot find device "nvmf_tgt_br2" 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:30.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:30.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:30.502 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:30.503 10:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:30.503 10:59:00 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:30.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:20:30.503 00:20:30.503 --- 10.0.0.2 ping statistics --- 00:20:30.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.503 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:30.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:30.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:20:30.503 00:20:30.503 --- 10.0.0.3 ping statistics --- 00:20:30.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.503 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:30.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:30.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:30.503 00:20:30.503 --- 10.0.0.1 ping statistics --- 00:20:30.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.503 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:30.503 10:59:00 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:31.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:31.325 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:31.325 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=83814 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 83814 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 83814 ']' 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:31.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:31.325 10:59:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:31.584 [2024-07-25 10:59:01.101042] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
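The nvmf_veth_init sequence traced above builds the test topology: three veth pairs, the target-side end of each pair moved into the nvmf_tgt_ns_spdk namespace, the host-side ends bridged together, and an iptables rule opening TCP port 4420. Condensed into plain commands (interface names and addresses exactly as in the trace; the real helper first tears down any leftovers, which is why the earlier "Cannot find device" messages are harmless):

# namespace and veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1 on the host, targets 10.0.0.2 / 10.0.0.3 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring links up and bridge the host-side peers
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# open the NVMe/TCP port and verify reachability, as the pings above do
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1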
00:20:31.584 [2024-07-25 10:59:01.101144] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.584 [2024-07-25 10:59:01.243692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:31.842 [2024-07-25 10:59:01.375317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.842 [2024-07-25 10:59:01.375391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.842 [2024-07-25 10:59:01.375405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.843 [2024-07-25 10:59:01.375416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.843 [2024-07-25 10:59:01.375425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.843 [2024-07-25 10:59:01.375596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.843 [2024-07-25 10:59:01.375993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.843 [2024-07-25 10:59:01.376707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:31.843 [2024-07-25 10:59:01.376753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.843 [2024-07-25 10:59:01.436447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:20:32.409 10:59:02 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:20:32.409 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:32.668 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
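nvme_in_userspace enumerates NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). The pipeline visible in the trace boils down to the sketch below; the real helper additionally vets each address with pci_can_use and checks the driver binding before accepting it.

# how the two controller addresses above (0000:00:10.0, 0000:00:11.0) are found
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'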
00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:32.669 10:59:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:32.669 ************************************ 00:20:32.669 START TEST spdk_target_abort 00:20:32.669 ************************************ 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:32.669 spdk_targetn1 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:32.669 [2024-07-25 10:59:02.252439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:32.669 [2024-07-25 10:59:02.280710] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.669 10:59:02 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:32.669 10:59:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:35.953 Initializing NVMe Controllers 00:20:35.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:35.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:35.954 Initialization complete. Launching workers. 
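spdk_target_abort wires a local PCIe NVMe controller into an NVMe-oF/TCP subsystem before driving it with the abort example. rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py (default socket /var/tmp/spdk.sock), so the same setup can be expressed as the sketch below, with paths relative to the SPDK repo and flags exactly as traced:

rpc=./scripts/rpc.py    # what rpc_cmd wraps in the trace
$rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # exposes bdev spdk_targetn1
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# each queue depth (4, 24, 64) is then exercised with the abort example
./build/examples/abort -q 4 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'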
00:20:35.954 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9493, failed: 0 00:20:35.954 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1066, failed to submit 8427 00:20:35.954 success 783, unsuccess 283, failed 0 00:20:35.954 10:59:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:35.954 10:59:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:39.237 Initializing NVMe Controllers 00:20:39.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:39.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:39.237 Initialization complete. Launching workers. 00:20:39.237 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8928, failed: 0 00:20:39.237 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1189, failed to submit 7739 00:20:39.237 success 362, unsuccess 827, failed 0 00:20:39.237 10:59:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:39.237 10:59:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:42.524 Initializing NVMe Controllers 00:20:42.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:42.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:42.524 Initialization complete. Launching workers. 
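The per-run counters reported by the abort example appear to fit together: aborts submitted plus aborts that could not be submitted add up to the I/Os completed, and successful plus unsuccessful aborts add up to the aborts submitted. A quick check against the two completed runs above:

# qd=4 : 1066 + 8427 = 9493 I/Os completed,  783 + 283 = 1066 aborts submitted
# qd=24: 1189 + 7739 = 8928 I/Os completed,  362 + 827 = 1189 aborts submitted
echo $((1066 + 8427)) $((783 + 283))    # 9493 1066
echo $((1189 + 7739)) $((362 + 827))    # 8928 1189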
00:20:42.524 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32281, failed: 0 00:20:42.524 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2284, failed to submit 29997 00:20:42.524 success 500, unsuccess 1784, failed 0 00:20:42.524 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:42.524 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.524 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:42.525 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.525 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:42.525 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.525 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83814 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 83814 ']' 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 83814 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83814 00:20:43.092 killing process with pid 83814 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83814' 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 83814 00:20:43.092 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 83814 00:20:43.351 ************************************ 00:20:43.351 END TEST spdk_target_abort 00:20:43.351 ************************************ 00:20:43.351 00:20:43.351 real 0m10.790s 00:20:43.351 user 0m43.683s 00:20:43.351 sys 0m2.125s 00:20:43.351 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:43.351 10:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:43.351 10:59:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:43.351 10:59:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:43.351 10:59:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.351 10:59:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:43.351 ************************************ 00:20:43.351 START TEST kernel_target_abort 00:20:43.351 
************************************ 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:43.351 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:43.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:43.919 Waiting for block devices as requested 00:20:43.919 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:43.919 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:43.919 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:43.919 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:43.919 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:43.919 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:43.919 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:43.919 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:43.919 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:43.919 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:43.919 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:43.919 No valid GPT data, bailing 00:20:43.919 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:44.236 No valid GPT data, bailing 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
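kernel_target_abort first has to pick a namespace it can safely hand to the kernel target. The loop traced here walks /sys/block/nvme*, skips zoned devices, and treats a device with no partition table (spdk-gpt.py and blkid find no PTTYPE, hence the repeated "No valid GPT data, bailing") as free; the last free device found is the one used. A simplified sketch of that selection, with the spdk-gpt.py step left out:

nvme=
for dev in /sys/block/nvme*; do
    name=${dev##*/}
    [[ $(<"$dev/queue/zoned") == none ]] || continue                  # skip zoned namespaces
    [[ -z $(blkid -s PTTYPE -o value "/dev/$name") ]] && nvme=/dev/$name
done
echo "kernel target will use $nvme"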
00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:44.236 No valid GPT data, bailing 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:44.236 No valid GPT data, bailing 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:44.236 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c --hostid=bb4b8bd3-cfb4-4368-bf29-91254747069c -a 10.0.0.1 -t tcp -s 4420 00:20:44.495 00:20:44.495 Discovery Log Number of Records 2, Generation counter 2 00:20:44.495 =====Discovery Log Entry 0====== 00:20:44.495 trtype: tcp 00:20:44.495 adrfam: ipv4 00:20:44.495 subtype: current discovery subsystem 00:20:44.495 treq: not specified, sq flow control disable supported 00:20:44.495 portid: 1 00:20:44.495 trsvcid: 4420 00:20:44.495 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:44.495 traddr: 10.0.0.1 00:20:44.495 eflags: none 00:20:44.495 sectype: none 00:20:44.495 =====Discovery Log Entry 1====== 00:20:44.495 trtype: tcp 00:20:44.495 adrfam: ipv4 00:20:44.495 subtype: nvme subsystem 00:20:44.495 treq: not specified, sq flow control disable supported 00:20:44.495 portid: 1 00:20:44.495 trsvcid: 4420 00:20:44.495 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:44.495 traddr: 10.0.0.1 00:20:44.495 eflags: none 00:20:44.495 sectype: none 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:44.495 10:59:13 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:44.495 10:59:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:47.784 Initializing NVMe Controllers 00:20:47.784 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:47.784 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:47.784 Initialization complete. Launching workers. 00:20:47.784 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30869, failed: 0 00:20:47.784 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30869, failed to submit 0 00:20:47.784 success 0, unsuccess 30869, failed 0 00:20:47.784 10:59:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:47.784 10:59:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:51.070 Initializing NVMe Controllers 00:20:51.070 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:51.070 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:51.070 Initialization complete. Launching workers. 
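configure_kernel_target, traced just before these runs, builds the kernel nvmet target through configfs and then verifies it with nvme discover. The redirect targets of the echo commands are not visible in the xtrace, so the sketch below uses the standard nvmet configfs attribute names as an assumption; the helper also writes a SPDK-prefixed subsystem identifier whose destination file is not shown, so it is omitted here.

cfs=/sys/kernel/config/nvmet
subnqn=nqn.2016-06.io.spdk:testnqn
mkdir "$cfs/subsystems/$subnqn"
mkdir "$cfs/subsystems/$subnqn/namespaces/1"
mkdir "$cfs/ports/1"
echo 1            > "$cfs/subsystems/$subnqn/attr_allow_any_host"        # assumed attribute names
echo /dev/nvme1n1 > "$cfs/subsystems/$subnqn/namespaces/1/device_path"
echo 1            > "$cfs/subsystems/$subnqn/namespaces/1/enable"
echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
echo tcp          > "$cfs/ports/1/addr_trtype"
echo 4420         > "$cfs/ports/1/addr_trsvcid"
echo ipv4         > "$cfs/ports/1/addr_adrfam"
ln -s "$cfs/subsystems/$subnqn" "$cfs/ports/1/subsystems/"
# the discovery log above (two records: the discovery subsystem plus
# nqn.2016-06.io.spdk:testnqn) confirms the port is listening
nvme discover -t tcp -a 10.0.0.1 -s 4420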
00:20:51.070 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67056, failed: 0 00:20:51.070 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27516, failed to submit 39540 00:20:51.070 success 0, unsuccess 27516, failed 0 00:20:51.070 10:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:51.070 10:59:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:54.356 Initializing NVMe Controllers 00:20:54.356 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:54.356 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:54.356 Initialization complete. Launching workers. 00:20:54.356 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74342, failed: 0 00:20:54.356 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18572, failed to submit 55770 00:20:54.356 success 0, unsuccess 18572, failed 0 00:20:54.356 10:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:54.356 10:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:54.356 10:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:20:54.356 10:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:54.356 10:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:54.356 10:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:54.356 10:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:54.356 10:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:54.356 10:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:54.356 10:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:54.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:56.514 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:56.514 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:56.514 00:20:56.514 real 0m12.819s 00:20:56.514 user 0m5.718s 00:20:56.514 sys 0m4.397s 00:20:56.514 10:59:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:56.514 10:59:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:56.514 ************************************ 00:20:56.514 END TEST kernel_target_abort 00:20:56.514 ************************************ 00:20:56.514 10:59:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:56.514 10:59:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:56.514 
10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.515 rmmod nvme_tcp 00:20:56.515 rmmod nvme_fabrics 00:20:56.515 rmmod nvme_keyring 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 83814 ']' 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 83814 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 83814 ']' 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 83814 00:20:56.515 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83814) - No such process 00:20:56.515 Process with pid 83814 is not found 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 83814 is not found' 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:56.515 10:59:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:56.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:56.772 Waiting for block devices as requested 00:20:56.772 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:56.772 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.031 10:59:26 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.031 10:59:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.031 10:59:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.031 10:59:26 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.031 10:59:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.031 10:59:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:57.031 10:59:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.031 10:59:26 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:57.031 00:20:57.031 real 0m26.858s 00:20:57.031 user 0m50.609s 00:20:57.031 sys 0m7.876s 00:20:57.031 ************************************ 00:20:57.031 END TEST nvmf_abort_qd_sizes 00:20:57.031 ************************************ 00:20:57.031 10:59:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:57.031 10:59:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:57.031 10:59:26 -- spdk/autotest.sh@299 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:57.031 10:59:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:57.031 10:59:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:57.031 10:59:26 -- common/autotest_common.sh@10 -- # set +x 
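Teardown at the end of the test mirrors the trace: the kernel NVMe/TCP initiator modules are unloaded (the rmmod lines above), the target process, which already exited, is reaped, and the PCI devices and test addresses are returned to their original state:

sync
modprobe -v -r nvme-tcp      # also drops nvme_fabrics and nvme_keyring, per the rmmod output above
modprobe -v -r nvme-fabrics
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
ip -4 addr flush nvmf_init_if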
00:20:57.031 ************************************ 00:20:57.031 START TEST keyring_file 00:20:57.031 ************************************ 00:20:57.031 10:59:26 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:57.031 * Looking for test storage... 00:20:57.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:57.031 10:59:26 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:57.031 10:59:26 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.031 10:59:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:57.032 10:59:26 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.032 10:59:26 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.032 10:59:26 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.032 10:59:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.032 10:59:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.032 10:59:26 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.032 10:59:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:57.032 10:59:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@47 -- # : 0 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:57.032 10:59:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:57.032 10:59:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:57.032 10:59:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:57.032 10:59:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:57.032 10:59:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:57.032 10:59:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0FTktTBPZO 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:57.032 10:59:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:57.032 10:59:26 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0FTktTBPZO 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0FTktTBPZO 00:20:57.032 10:59:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.0FTktTBPZO 00:20:57.032 10:59:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:57.032 10:59:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:57.290 10:59:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VoMbukonet 00:20:57.290 10:59:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:57.290 10:59:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:57.290 10:59:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:57.290 10:59:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:57.290 10:59:26 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:20:57.290 10:59:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:57.290 10:59:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:57.290 10:59:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VoMbukonet 00:20:57.290 10:59:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VoMbukonet 00:20:57.290 10:59:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.VoMbukonet 00:20:57.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.290 10:59:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=84684 00:20:57.290 10:59:26 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.290 10:59:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84684 00:20:57.290 10:59:26 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84684 ']' 00:20:57.290 10:59:26 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.290 10:59:26 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:57.290 10:59:26 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.290 10:59:26 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:57.290 10:59:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:57.290 [2024-07-25 10:59:26.891292] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
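The two key files used throughout this test (/tmp/tmp.0FTktTBPZO and /tmp/tmp.VoMbukonet) come from the harness's prep_key/format_interchange_psk helpers, whose inline python body is not captured in the trace. A hand-runnable sketch of the same preparation follows; the exact interchange layout (base64 of the raw key bytes followed by a little-endian CRC32, with 00 as the digest field for an unhashed PSK) is an assumption here, not something the log shows.

key_hex=00112233445566778899aabbccddeeff      # key0 as defined in file.sh above
path=$(mktemp)                                # e.g. /tmp/tmp.0FTktTBPZO
python3 - "$key_hex" > "$path" <<'EOF'
import base64, struct, sys, zlib
psk = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(psk))      # assumed: little-endian CRC32 of the PSK, appended before base64
print("NVMeTLSkey-1:00:" + base64.b64encode(psk + crc).decode() + ":")
EOF
chmod 0600 "$path"                            # keyring_file rejects group/world-accessible key files
echo "$path"

The 0600 mode matters: a later step in this same trace flips one of these files to 0660 and keyring_file_add_key fails with "Invalid permissions".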
00:20:57.290 [2024-07-25 10:59:26.891551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84684 ] 00:20:57.548 [2024-07-25 10:59:27.033651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.548 [2024-07-25 10:59:27.134829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.548 [2024-07-25 10:59:27.191988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:20:58.484 10:59:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:58.484 [2024-07-25 10:59:27.874510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.484 null0 00:20:58.484 [2024-07-25 10:59:27.906478] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:58.484 [2024-07-25 10:59:27.906693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:58.484 [2024-07-25 10:59:27.914464] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.484 10:59:27 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:58.484 [2024-07-25 10:59:27.926463] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:58.484 request: 00:20:58.484 { 00:20:58.484 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:58.484 "secure_channel": false, 00:20:58.484 "listen_address": { 00:20:58.484 "trtype": "tcp", 00:20:58.484 "traddr": "127.0.0.1", 00:20:58.484 "trsvcid": "4420" 00:20:58.484 }, 00:20:58.484 "method": "nvmf_subsystem_add_listener", 00:20:58.484 "req_id": 1 00:20:58.484 } 00:20:58.484 Got JSON-RPC error response 00:20:58.484 response: 00:20:58.484 { 00:20:58.484 "code": -32602, 00:20:58.484 "message": "Invalid parameters" 00:20:58.484 } 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
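The [[ 1 == 0 ]] just above is the expected branch of a negative test: the target already listens for nqn.2016-06.io.spdk:cnode0 on 127.0.0.1:4420, so adding the same listener again must be rejected. Replayed by hand against the same target (rpc.py here talks to the default /var/tmp/spdk.sock socket the target is waiting on), it looks roughly like:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
# target log: "Listener already exists"; RPC reply: code -32602, "Invalid parameters"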
00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.484 10:59:27 keyring_file -- keyring/file.sh@46 -- # bperfpid=84697 00:20:58.484 10:59:27 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:58.484 10:59:27 keyring_file -- keyring/file.sh@48 -- # waitforlisten 84697 /var/tmp/bperf.sock 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84697 ']' 00:20:58.484 10:59:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:58.485 10:59:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.485 10:59:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:58.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:58.485 10:59:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.485 10:59:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:58.485 [2024-07-25 10:59:27.993332] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:58.485 [2024-07-25 10:59:27.993616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84697 ] 00:20:58.485 [2024-07-25 10:59:28.134210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.743 [2024-07-25 10:59:28.241522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.743 [2024-07-25 10:59:28.299329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:59.311 10:59:28 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.311 10:59:28 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:20:59.311 10:59:28 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0FTktTBPZO 00:20:59.311 10:59:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0FTktTBPZO 00:20:59.570 10:59:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VoMbukonet 00:20:59.570 10:59:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VoMbukonet 00:20:59.829 10:59:29 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:20:59.829 10:59:29 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:20:59.829 10:59:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:59.829 10:59:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:59.829 10:59:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:00.087 10:59:29 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.0FTktTBPZO == 
\/\t\m\p\/\t\m\p\.\0\F\T\k\t\T\B\P\Z\O ]] 00:21:00.088 10:59:29 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:00.088 10:59:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:00.088 10:59:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:00.088 10:59:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:00.088 10:59:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:00.088 10:59:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.VoMbukonet == \/\t\m\p\/\t\m\p\.\V\o\M\b\u\k\o\n\e\t ]] 00:21:00.088 10:59:29 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:00.347 10:59:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:00.347 10:59:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:00.347 10:59:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:00.347 10:59:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:00.347 10:59:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:00.347 10:59:30 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:00.347 10:59:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:00.347 10:59:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:00.347 10:59:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:00.347 10:59:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:00.347 10:59:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:00.347 10:59:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:00.606 10:59:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:00.606 10:59:30 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:00.606 10:59:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:00.865 [2024-07-25 10:59:30.487300] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.865 nvme0n1 00:21:00.865 10:59:30 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:00.865 10:59:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:00.865 10:59:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:00.865 10:59:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:00.865 10:59:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:00.865 10:59:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:01.124 10:59:30 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:01.124 10:59:30 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:01.124 10:59:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:01.124 10:59:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:01.124 10:59:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:01.124 10:59:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:01.124 10:59:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:01.382 10:59:31 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:01.382 10:59:31 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:01.640 Running I/O for 1 seconds... 00:21:02.574 00:21:02.574 Latency(us) 00:21:02.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.574 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:02.574 nvme0n1 : 1.01 11454.22 44.74 0.00 0.00 11137.40 3932.16 16324.42 00:21:02.574 =================================================================================================================== 00:21:02.574 Total : 11454.22 44.74 0.00 0.00 11137.40 3932.16 16324.42 00:21:02.574 0 00:21:02.574 10:59:32 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:02.574 10:59:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:02.833 10:59:32 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:02.833 10:59:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:02.833 10:59:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:02.833 10:59:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:02.833 10:59:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:02.833 10:59:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:03.092 10:59:32 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:03.092 10:59:32 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:03.092 10:59:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:03.092 10:59:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:03.092 10:59:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:03.092 10:59:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:03.092 10:59:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:03.351 10:59:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:03.351 10:59:33 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:03.351 10:59:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:03.351 10:59:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:03.351 10:59:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:03.352 10:59:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:03.352 10:59:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:03.352 10:59:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
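For readability, the attach / I/O / detach round trip traced above condenses to the following. Every command is lifted from the trace; only the bperf_rpc wrapper name is local to this sketch (the harness uses its own bperf_cmd helper).

bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

bperf_rpc keyring_file_add_key key0 /tmp/tmp.0FTktTBPZO
bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
bperf_rpc keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'    # 2 while the controller holds key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
bperf_rpc bdev_nvme_detach_controller nvme0
bperf_rpc keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'    # back to 1 after detach

The refcnt transitions (1 to 2 on attach, back to 1 on detach) are exactly what the (( 2 == 2 )) and (( 1 == 1 )) assertions in the trace are checking.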
00:21:03.352 10:59:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:03.352 10:59:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:03.611 [2024-07-25 10:59:33.223563] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:03.611 [2024-07-25 10:59:33.224224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5d4f0 (107): Transport endpoint is not connected 00:21:03.611 [2024-07-25 10:59:33.225214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5d4f0 (9): Bad file descriptor 00:21:03.611 [2024-07-25 10:59:33.226211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:03.611 [2024-07-25 10:59:33.226239] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:03.611 [2024-07-25 10:59:33.226250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:03.611 request: 00:21:03.611 { 00:21:03.611 "name": "nvme0", 00:21:03.611 "trtype": "tcp", 00:21:03.611 "traddr": "127.0.0.1", 00:21:03.611 "adrfam": "ipv4", 00:21:03.611 "trsvcid": "4420", 00:21:03.611 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.611 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:03.611 "prchk_reftag": false, 00:21:03.611 "prchk_guard": false, 00:21:03.611 "hdgst": false, 00:21:03.611 "ddgst": false, 00:21:03.611 "psk": "key1", 00:21:03.611 "method": "bdev_nvme_attach_controller", 00:21:03.611 "req_id": 1 00:21:03.611 } 00:21:03.611 Got JSON-RPC error response 00:21:03.611 response: 00:21:03.611 { 00:21:03.611 "code": -5, 00:21:03.611 "message": "Input/output error" 00:21:03.611 } 00:21:03.611 10:59:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:03.611 10:59:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:03.611 10:59:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:03.611 10:59:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:03.611 10:59:33 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:03.611 10:59:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:03.611 10:59:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:03.611 10:59:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:03.611 10:59:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:03.611 10:59:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:03.869 10:59:33 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:03.869 10:59:33 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:03.869 10:59:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:03.869 10:59:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:03.869 10:59:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:03.869 10:59:33 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:03.869 10:59:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:04.127 10:59:33 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:04.127 10:59:33 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:04.127 10:59:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:04.385 10:59:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:04.385 10:59:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:04.644 10:59:34 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:04.644 10:59:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:04.644 10:59:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:04.902 10:59:34 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:04.902 10:59:34 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.0FTktTBPZO 00:21:04.902 10:59:34 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.0FTktTBPZO 00:21:04.902 10:59:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:04.902 10:59:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.0FTktTBPZO 00:21:04.902 10:59:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:04.902 10:59:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.903 10:59:34 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:04.903 10:59:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.903 10:59:34 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0FTktTBPZO 00:21:04.903 10:59:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0FTktTBPZO 00:21:05.161 [2024-07-25 10:59:34.840634] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0FTktTBPZO': 0100660 00:21:05.161 [2024-07-25 10:59:34.840712] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:05.161 request: 00:21:05.161 { 00:21:05.161 "name": "key0", 00:21:05.161 "path": "/tmp/tmp.0FTktTBPZO", 00:21:05.161 "method": "keyring_file_add_key", 00:21:05.161 "req_id": 1 00:21:05.161 } 00:21:05.161 Got JSON-RPC error response 00:21:05.161 response: 00:21:05.161 { 00:21:05.161 "code": -1, 00:21:05.161 "message": "Operation not permitted" 00:21:05.161 } 00:21:05.161 10:59:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:05.161 10:59:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:05.161 10:59:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:05.161 10:59:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:05.161 10:59:34 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.0FTktTBPZO 00:21:05.161 10:59:34 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0FTktTBPZO 00:21:05.161 10:59:34 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0FTktTBPZO 00:21:05.420 10:59:35 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.0FTktTBPZO 00:21:05.420 10:59:35 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:05.420 10:59:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:05.420 10:59:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:05.420 10:59:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:05.420 10:59:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:05.420 10:59:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:05.679 10:59:35 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:05.679 10:59:35 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:05.679 10:59:35 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:05.679 10:59:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:05.679 10:59:35 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:05.679 10:59:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.679 10:59:35 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:05.679 10:59:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.679 10:59:35 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:05.679 10:59:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:05.939 [2024-07-25 10:59:35.592951] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.0FTktTBPZO': No such file or directory 00:21:05.939 [2024-07-25 10:59:35.593023] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:05.939 [2024-07-25 10:59:35.593055] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:05.939 [2024-07-25 10:59:35.593066] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:05.939 [2024-07-25 10:59:35.593077] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:05.939 request: 00:21:05.939 { 00:21:05.939 "name": "nvme0", 00:21:05.939 "trtype": "tcp", 00:21:05.939 "traddr": "127.0.0.1", 00:21:05.939 "adrfam": "ipv4", 00:21:05.939 "trsvcid": "4420", 00:21:05.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:05.939 "prchk_reftag": false, 00:21:05.939 "prchk_guard": false, 00:21:05.939 "hdgst": false, 00:21:05.939 "ddgst": false, 00:21:05.939 "psk": "key0", 00:21:05.939 "method": "bdev_nvme_attach_controller", 00:21:05.939 "req_id": 1 00:21:05.939 } 00:21:05.939 
Got JSON-RPC error response 00:21:05.939 response: 00:21:05.939 { 00:21:05.939 "code": -19, 00:21:05.939 "message": "No such device" 00:21:05.939 } 00:21:05.939 10:59:35 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:05.939 10:59:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:05.939 10:59:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:05.939 10:59:35 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:05.939 10:59:35 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:05.939 10:59:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:06.198 10:59:35 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:06.198 10:59:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:06.198 10:59:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:06.198 10:59:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:06.198 10:59:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:06.198 10:59:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:06.198 10:59:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SLnVnIoSJF 00:21:06.198 10:59:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:06.198 10:59:35 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:06.198 10:59:35 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:06.198 10:59:35 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:06.198 10:59:35 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:06.198 10:59:35 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:06.198 10:59:35 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:06.456 10:59:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SLnVnIoSJF 00:21:06.456 10:59:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SLnVnIoSJF 00:21:06.456 10:59:35 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.SLnVnIoSJF 00:21:06.457 10:59:35 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SLnVnIoSJF 00:21:06.457 10:59:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SLnVnIoSJF 00:21:06.716 10:59:36 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:06.716 10:59:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:06.975 nvme0n1 00:21:06.975 10:59:36 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:06.975 10:59:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:06.975 10:59:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:06.976 10:59:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:06.976 10:59:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:21:06.976 10:59:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.234 10:59:36 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:07.235 10:59:36 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:07.235 10:59:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:07.493 10:59:37 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:07.493 10:59:37 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:07.493 10:59:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.493 10:59:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.493 10:59:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.753 10:59:37 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:07.753 10:59:37 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:07.753 10:59:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:07.753 10:59:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:07.753 10:59:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.753 10:59:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.753 10:59:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.011 10:59:37 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:08.011 10:59:37 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:08.011 10:59:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:08.579 10:59:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:08.579 10:59:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.579 10:59:38 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:08.579 10:59:38 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:08.579 10:59:38 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SLnVnIoSJF 00:21:08.579 10:59:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SLnVnIoSJF 00:21:08.838 10:59:38 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VoMbukonet 00:21:08.838 10:59:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VoMbukonet 00:21:09.097 10:59:38 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:09.097 10:59:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:09.355 nvme0n1 00:21:09.355 10:59:39 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:09.355 10:59:39 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:09.924 10:59:39 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:09.924 "subsystems": [ 00:21:09.924 { 00:21:09.924 "subsystem": "keyring", 00:21:09.924 "config": [ 00:21:09.924 { 00:21:09.924 "method": "keyring_file_add_key", 00:21:09.924 "params": { 00:21:09.924 "name": "key0", 00:21:09.924 "path": "/tmp/tmp.SLnVnIoSJF" 00:21:09.924 } 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "method": "keyring_file_add_key", 00:21:09.924 "params": { 00:21:09.924 "name": "key1", 00:21:09.924 "path": "/tmp/tmp.VoMbukonet" 00:21:09.924 } 00:21:09.924 } 00:21:09.924 ] 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "subsystem": "iobuf", 00:21:09.924 "config": [ 00:21:09.924 { 00:21:09.924 "method": "iobuf_set_options", 00:21:09.924 "params": { 00:21:09.924 "small_pool_count": 8192, 00:21:09.924 "large_pool_count": 1024, 00:21:09.924 "small_bufsize": 8192, 00:21:09.924 "large_bufsize": 135168 00:21:09.924 } 00:21:09.924 } 00:21:09.924 ] 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "subsystem": "sock", 00:21:09.924 "config": [ 00:21:09.924 { 00:21:09.924 "method": "sock_set_default_impl", 00:21:09.924 "params": { 00:21:09.924 "impl_name": "uring" 00:21:09.924 } 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "method": "sock_impl_set_options", 00:21:09.924 "params": { 00:21:09.924 "impl_name": "ssl", 00:21:09.924 "recv_buf_size": 4096, 00:21:09.924 "send_buf_size": 4096, 00:21:09.924 "enable_recv_pipe": true, 00:21:09.924 "enable_quickack": false, 00:21:09.924 "enable_placement_id": 0, 00:21:09.924 "enable_zerocopy_send_server": true, 00:21:09.924 "enable_zerocopy_send_client": false, 00:21:09.924 "zerocopy_threshold": 0, 00:21:09.924 "tls_version": 0, 00:21:09.924 "enable_ktls": false 00:21:09.924 } 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "method": "sock_impl_set_options", 00:21:09.924 "params": { 00:21:09.924 "impl_name": "posix", 00:21:09.924 "recv_buf_size": 2097152, 00:21:09.924 "send_buf_size": 2097152, 00:21:09.924 "enable_recv_pipe": true, 00:21:09.924 "enable_quickack": false, 00:21:09.924 "enable_placement_id": 0, 00:21:09.924 "enable_zerocopy_send_server": true, 00:21:09.924 "enable_zerocopy_send_client": false, 00:21:09.924 "zerocopy_threshold": 0, 00:21:09.924 "tls_version": 0, 00:21:09.924 "enable_ktls": false 00:21:09.924 } 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "method": "sock_impl_set_options", 00:21:09.924 "params": { 00:21:09.924 "impl_name": "uring", 00:21:09.924 "recv_buf_size": 2097152, 00:21:09.924 "send_buf_size": 2097152, 00:21:09.924 "enable_recv_pipe": true, 00:21:09.924 "enable_quickack": false, 00:21:09.924 "enable_placement_id": 0, 00:21:09.924 "enable_zerocopy_send_server": false, 00:21:09.924 "enable_zerocopy_send_client": false, 00:21:09.924 "zerocopy_threshold": 0, 00:21:09.924 "tls_version": 0, 00:21:09.924 "enable_ktls": false 00:21:09.924 } 00:21:09.924 } 00:21:09.924 ] 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "subsystem": "vmd", 00:21:09.924 "config": [] 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "subsystem": "accel", 00:21:09.924 "config": [ 00:21:09.924 { 00:21:09.924 "method": "accel_set_options", 00:21:09.924 "params": { 00:21:09.924 "small_cache_size": 128, 00:21:09.924 "large_cache_size": 16, 00:21:09.924 "task_count": 2048, 00:21:09.924 "sequence_count": 2048, 00:21:09.924 "buf_count": 2048 00:21:09.924 } 00:21:09.924 } 00:21:09.924 ] 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "subsystem": "bdev", 00:21:09.924 "config": [ 00:21:09.924 { 
00:21:09.924 "method": "bdev_set_options", 00:21:09.924 "params": { 00:21:09.924 "bdev_io_pool_size": 65535, 00:21:09.924 "bdev_io_cache_size": 256, 00:21:09.924 "bdev_auto_examine": true, 00:21:09.924 "iobuf_small_cache_size": 128, 00:21:09.924 "iobuf_large_cache_size": 16 00:21:09.924 } 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "method": "bdev_raid_set_options", 00:21:09.924 "params": { 00:21:09.924 "process_window_size_kb": 1024, 00:21:09.924 "process_max_bandwidth_mb_sec": 0 00:21:09.924 } 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "method": "bdev_iscsi_set_options", 00:21:09.924 "params": { 00:21:09.924 "timeout_sec": 30 00:21:09.924 } 00:21:09.924 }, 00:21:09.924 { 00:21:09.924 "method": "bdev_nvme_set_options", 00:21:09.924 "params": { 00:21:09.925 "action_on_timeout": "none", 00:21:09.925 "timeout_us": 0, 00:21:09.925 "timeout_admin_us": 0, 00:21:09.925 "keep_alive_timeout_ms": 10000, 00:21:09.925 "arbitration_burst": 0, 00:21:09.925 "low_priority_weight": 0, 00:21:09.925 "medium_priority_weight": 0, 00:21:09.925 "high_priority_weight": 0, 00:21:09.925 "nvme_adminq_poll_period_us": 10000, 00:21:09.925 "nvme_ioq_poll_period_us": 0, 00:21:09.925 "io_queue_requests": 512, 00:21:09.925 "delay_cmd_submit": true, 00:21:09.925 "transport_retry_count": 4, 00:21:09.925 "bdev_retry_count": 3, 00:21:09.925 "transport_ack_timeout": 0, 00:21:09.925 "ctrlr_loss_timeout_sec": 0, 00:21:09.925 "reconnect_delay_sec": 0, 00:21:09.925 "fast_io_fail_timeout_sec": 0, 00:21:09.925 "disable_auto_failback": false, 00:21:09.925 "generate_uuids": false, 00:21:09.925 "transport_tos": 0, 00:21:09.925 "nvme_error_stat": false, 00:21:09.925 "rdma_srq_size": 0, 00:21:09.925 "io_path_stat": false, 00:21:09.925 "allow_accel_sequence": false, 00:21:09.925 "rdma_max_cq_size": 0, 00:21:09.925 "rdma_cm_event_timeout_ms": 0, 00:21:09.925 "dhchap_digests": [ 00:21:09.925 "sha256", 00:21:09.925 "sha384", 00:21:09.925 "sha512" 00:21:09.925 ], 00:21:09.925 "dhchap_dhgroups": [ 00:21:09.925 "null", 00:21:09.925 "ffdhe2048", 00:21:09.925 "ffdhe3072", 00:21:09.925 "ffdhe4096", 00:21:09.925 "ffdhe6144", 00:21:09.925 "ffdhe8192" 00:21:09.925 ] 00:21:09.925 } 00:21:09.925 }, 00:21:09.925 { 00:21:09.925 "method": "bdev_nvme_attach_controller", 00:21:09.925 "params": { 00:21:09.925 "name": "nvme0", 00:21:09.925 "trtype": "TCP", 00:21:09.925 "adrfam": "IPv4", 00:21:09.925 "traddr": "127.0.0.1", 00:21:09.925 "trsvcid": "4420", 00:21:09.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:09.925 "prchk_reftag": false, 00:21:09.925 "prchk_guard": false, 00:21:09.925 "ctrlr_loss_timeout_sec": 0, 00:21:09.925 "reconnect_delay_sec": 0, 00:21:09.925 "fast_io_fail_timeout_sec": 0, 00:21:09.925 "psk": "key0", 00:21:09.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:09.925 "hdgst": false, 00:21:09.925 "ddgst": false 00:21:09.925 } 00:21:09.925 }, 00:21:09.925 { 00:21:09.925 "method": "bdev_nvme_set_hotplug", 00:21:09.925 "params": { 00:21:09.925 "period_us": 100000, 00:21:09.925 "enable": false 00:21:09.925 } 00:21:09.925 }, 00:21:09.925 { 00:21:09.925 "method": "bdev_wait_for_examine" 00:21:09.925 } 00:21:09.925 ] 00:21:09.925 }, 00:21:09.925 { 00:21:09.925 "subsystem": "nbd", 00:21:09.925 "config": [] 00:21:09.925 } 00:21:09.925 ] 00:21:09.925 }' 00:21:09.925 10:59:39 keyring_file -- keyring/file.sh@114 -- # killprocess 84697 00:21:09.925 10:59:39 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84697 ']' 00:21:09.925 10:59:39 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84697 00:21:09.925 10:59:39 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:21:09.925 10:59:39 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:09.925 10:59:39 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84697 00:21:09.925 10:59:39 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:09.925 10:59:39 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:09.925 10:59:39 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84697' 00:21:09.925 killing process with pid 84697 00:21:09.925 Received shutdown signal, test time was about 1.000000 seconds 00:21:09.925 00:21:09.925 Latency(us) 00:21:09.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.925 =================================================================================================================== 00:21:09.925 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.925 10:59:39 keyring_file -- common/autotest_common.sh@969 -- # kill 84697 00:21:09.925 10:59:39 keyring_file -- common/autotest_common.sh@974 -- # wait 84697 00:21:10.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:10.185 10:59:39 keyring_file -- keyring/file.sh@117 -- # bperfpid=84950 00:21:10.185 10:59:39 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84950 /var/tmp/bperf.sock 00:21:10.185 10:59:39 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:10.185 10:59:39 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84950 ']' 00:21:10.185 10:59:39 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:10.185 10:59:39 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.185 10:59:39 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:10.185 "subsystems": [ 00:21:10.185 { 00:21:10.185 "subsystem": "keyring", 00:21:10.185 "config": [ 00:21:10.185 { 00:21:10.185 "method": "keyring_file_add_key", 00:21:10.185 "params": { 00:21:10.185 "name": "key0", 00:21:10.185 "path": "/tmp/tmp.SLnVnIoSJF" 00:21:10.185 } 00:21:10.185 }, 00:21:10.185 { 00:21:10.185 "method": "keyring_file_add_key", 00:21:10.185 "params": { 00:21:10.185 "name": "key1", 00:21:10.185 "path": "/tmp/tmp.VoMbukonet" 00:21:10.185 } 00:21:10.185 } 00:21:10.185 ] 00:21:10.185 }, 00:21:10.185 { 00:21:10.185 "subsystem": "iobuf", 00:21:10.185 "config": [ 00:21:10.185 { 00:21:10.185 "method": "iobuf_set_options", 00:21:10.185 "params": { 00:21:10.185 "small_pool_count": 8192, 00:21:10.185 "large_pool_count": 1024, 00:21:10.185 "small_bufsize": 8192, 00:21:10.185 "large_bufsize": 135168 00:21:10.185 } 00:21:10.185 } 00:21:10.185 ] 00:21:10.185 }, 00:21:10.185 { 00:21:10.185 "subsystem": "sock", 00:21:10.185 "config": [ 00:21:10.185 { 00:21:10.185 "method": "sock_set_default_impl", 00:21:10.185 "params": { 00:21:10.185 "impl_name": "uring" 00:21:10.185 } 00:21:10.185 }, 00:21:10.185 { 00:21:10.185 "method": "sock_impl_set_options", 00:21:10.185 "params": { 00:21:10.185 "impl_name": "ssl", 00:21:10.185 "recv_buf_size": 4096, 00:21:10.185 "send_buf_size": 4096, 00:21:10.185 "enable_recv_pipe": true, 00:21:10.185 "enable_quickack": false, 00:21:10.185 "enable_placement_id": 0, 00:21:10.185 "enable_zerocopy_send_server": true, 00:21:10.185 "enable_zerocopy_send_client": false, 00:21:10.185 "zerocopy_threshold": 0, 
00:21:10.185 "tls_version": 0, 00:21:10.185 "enable_ktls": false 00:21:10.185 } 00:21:10.185 }, 00:21:10.185 { 00:21:10.185 "method": "sock_impl_set_options", 00:21:10.185 "params": { 00:21:10.185 "impl_name": "posix", 00:21:10.185 "recv_buf_size": 2097152, 00:21:10.185 "send_buf_size": 2097152, 00:21:10.185 "enable_recv_pipe": true, 00:21:10.185 "enable_quickack": false, 00:21:10.185 "enable_placement_id": 0, 00:21:10.185 "enable_zerocopy_send_server": true, 00:21:10.185 "enable_zerocopy_send_client": false, 00:21:10.185 "zerocopy_threshold": 0, 00:21:10.185 "tls_version": 0, 00:21:10.185 "enable_ktls": false 00:21:10.185 } 00:21:10.185 }, 00:21:10.185 { 00:21:10.185 "method": "sock_impl_set_options", 00:21:10.185 "params": { 00:21:10.185 "impl_name": "uring", 00:21:10.185 "recv_buf_size": 2097152, 00:21:10.185 "send_buf_size": 2097152, 00:21:10.185 "enable_recv_pipe": true, 00:21:10.185 "enable_quickack": false, 00:21:10.185 "enable_placement_id": 0, 00:21:10.185 "enable_zerocopy_send_server": false, 00:21:10.185 "enable_zerocopy_send_client": false, 00:21:10.185 "zerocopy_threshold": 0, 00:21:10.185 "tls_version": 0, 00:21:10.185 "enable_ktls": false 00:21:10.185 } 00:21:10.185 } 00:21:10.185 ] 00:21:10.185 }, 00:21:10.185 { 00:21:10.185 "subsystem": "vmd", 00:21:10.185 "config": [] 00:21:10.185 }, 00:21:10.185 { 00:21:10.185 "subsystem": "accel", 00:21:10.185 "config": [ 00:21:10.185 { 00:21:10.185 "method": "accel_set_options", 00:21:10.185 "params": { 00:21:10.185 "small_cache_size": 128, 00:21:10.185 "large_cache_size": 16, 00:21:10.185 "task_count": 2048, 00:21:10.185 "sequence_count": 2048, 00:21:10.185 "buf_count": 2048 00:21:10.185 } 00:21:10.185 } 00:21:10.185 ] 00:21:10.185 }, 00:21:10.185 { 00:21:10.185 "subsystem": "bdev", 00:21:10.185 "config": [ 00:21:10.185 { 00:21:10.185 "method": "bdev_set_options", 00:21:10.185 "params": { 00:21:10.185 "bdev_io_pool_size": 65535, 00:21:10.185 "bdev_io_cache_size": 256, 00:21:10.185 "bdev_auto_examine": true, 00:21:10.185 "iobuf_small_cache_size": 128, 00:21:10.185 "iobuf_large_cache_size": 16 00:21:10.185 } 00:21:10.185 }, 00:21:10.185 { 00:21:10.185 "method": "bdev_raid_set_options", 00:21:10.185 "params": { 00:21:10.185 "process_window_size_kb": 1024, 00:21:10.185 "process_max_bandwidth_mb_sec": 0 00:21:10.186 } 00:21:10.186 }, 00:21:10.186 { 00:21:10.186 "method": "bdev_iscsi_set_options", 00:21:10.186 "params": { 00:21:10.186 "timeout_sec": 30 00:21:10.186 } 00:21:10.186 }, 00:21:10.186 { 00:21:10.186 "method": "bdev_nvme_set_options", 00:21:10.186 "params": { 00:21:10.186 "action_on_timeout": "none", 00:21:10.186 "timeout_us": 0, 00:21:10.186 "timeout_admin_us": 0, 00:21:10.186 "keep_alive_timeout_ms": 10000, 00:21:10.186 "arbitration_burst": 0, 00:21:10.186 "low_priority_weight": 0, 00:21:10.186 "medium_priority_weight": 0, 00:21:10.186 "high_priority_weight": 0, 00:21:10.186 "nvme_adminq_poll_period_us": 10000, 00:21:10.186 "nvme_ioq_poll_period_us": 0, 00:21:10.186 "io_queue_requests": 512, 00:21:10.186 "delay_cmd_submit": true, 00:21:10.186 "transport_retry_count": 4, 00:21:10.186 "bdev_retry_count": 3, 00:21:10.186 "transport_ack_timeout": 0, 00:21:10.186 "ctrlr_loss_timeout_sec": 0, 00:21:10.186 "reconnect_delay_sec": 0, 00:21:10.186 "fast_io_fail_timeout_sec": 0, 00:21:10.186 "disable_auto_failback": false, 00:21:10.186 "generate_uuids": false, 00:21:10.186 "transport_tos": 0, 00:21:10.186 "nvme_error_stat": false, 00:21:10.186 "rdma_srq_size": 0, 00:21:10.186 "io_path_stat": false, 00:21:10.186 "allow_accel_sequence": 
false, 00:21:10.186 "rdma_max_cq_size": 0, 00:21:10.186 "rdma_cm_event_timeout_ms": 0, 00:21:10.186 "dhchap_digests": [ 00:21:10.186 "sha256", 00:21:10.186 "sha384", 00:21:10.186 "sha512" 00:21:10.186 ], 00:21:10.186 "dhchap_dhgroups": [ 00:21:10.186 "null", 00:21:10.186 "ffdhe2048", 00:21:10.186 "ffdhe3072", 00:21:10.186 "ffdhe4096", 00:21:10.186 "ffdhe6144", 00:21:10.186 "ffdhe8192" 00:21:10.186 ] 00:21:10.186 } 00:21:10.186 }, 00:21:10.186 { 00:21:10.186 "method": "bdev_nvme_attach_controller", 00:21:10.186 "params": { 00:21:10.186 "name": "nvme0", 00:21:10.186 "trtype": "TCP", 00:21:10.186 "adrfam": "IPv4", 00:21:10.186 "traddr": "127.0.0.1", 00:21:10.186 "trsvcid": "4420", 00:21:10.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:10.186 "prchk_reftag": false, 00:21:10.186 "prchk_guard": false, 00:21:10.186 "ctrlr_loss_timeout_sec": 0, 00:21:10.186 "reconnect_delay_sec": 0, 00:21:10.186 "fast_io_fail_timeout_sec": 0, 00:21:10.186 "psk": "key0", 00:21:10.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:10.186 "hdgst": false, 00:21:10.186 "ddgst": false 00:21:10.186 } 00:21:10.186 }, 00:21:10.186 { 00:21:10.186 "method": "bdev_nvme_set_hotplug", 00:21:10.186 "params": { 00:21:10.186 "period_us": 100000, 00:21:10.186 "enable": false 00:21:10.186 } 00:21:10.186 }, 00:21:10.186 { 00:21:10.186 "method": "bdev_wait_for_examine" 00:21:10.186 } 00:21:10.186 ] 00:21:10.186 }, 00:21:10.186 { 00:21:10.186 "subsystem": "nbd", 00:21:10.186 "config": [] 00:21:10.186 } 00:21:10.186 ] 00:21:10.186 }' 00:21:10.186 10:59:39 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:10.186 10:59:39 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.186 10:59:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:10.186 [2024-07-25 10:59:39.710959] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
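The second bdevperf above (pid 84950) is started with -c /dev/fd/63: the JSON echoed just before it, the same shape as the earlier save_config output, is fed in through process substitution so that the key files and the TLS-enabled controller exist before any RPC is issued. Assuming $config holds that JSON, the launch reduces to:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")
# -z keeps bdevperf idle, waiting for RPC commands instead of running a workload
# immediately; the keyring_get_keys / bdev_nvme_get_controllers checks that follow
# rely on the keys and nvme0 having been created from this config at startup.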
00:21:10.186 [2024-07-25 10:59:39.711302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84950 ] 00:21:10.186 [2024-07-25 10:59:39.838530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.445 [2024-07-25 10:59:39.963709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.445 [2024-07-25 10:59:40.115341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:10.445 [2024-07-25 10:59:40.177039] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.037 10:59:40 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.037 10:59:40 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:11.037 10:59:40 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:11.037 10:59:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.037 10:59:40 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:11.296 10:59:40 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:11.296 10:59:40 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:11.296 10:59:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:11.296 10:59:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.296 10:59:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.296 10:59:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:11.296 10:59:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.554 10:59:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:11.554 10:59:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:11.554 10:59:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:11.554 10:59:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.554 10:59:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.554 10:59:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.554 10:59:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:11.812 10:59:41 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:11.812 10:59:41 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:11.812 10:59:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:11.812 10:59:41 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:12.070 10:59:41 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:12.070 10:59:41 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:12.070 10:59:41 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.SLnVnIoSJF /tmp/tmp.VoMbukonet 00:21:12.070 10:59:41 keyring_file -- keyring/file.sh@20 -- # killprocess 84950 00:21:12.070 10:59:41 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84950 ']' 00:21:12.070 10:59:41 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84950 00:21:12.070 10:59:41 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:21:12.070 10:59:41 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.070 10:59:41 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84950 00:21:12.070 killing process with pid 84950 00:21:12.070 Received shutdown signal, test time was about 1.000000 seconds 00:21:12.070 00:21:12.070 Latency(us) 00:21:12.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.070 =================================================================================================================== 00:21:12.070 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:12.070 10:59:41 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:12.070 10:59:41 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:12.070 10:59:41 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84950' 00:21:12.070 10:59:41 keyring_file -- common/autotest_common.sh@969 -- # kill 84950 00:21:12.070 10:59:41 keyring_file -- common/autotest_common.sh@974 -- # wait 84950 00:21:12.328 10:59:42 keyring_file -- keyring/file.sh@21 -- # killprocess 84684 00:21:12.328 10:59:42 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84684 ']' 00:21:12.328 10:59:42 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84684 00:21:12.328 10:59:42 keyring_file -- common/autotest_common.sh@955 -- # uname 00:21:12.328 10:59:42 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.328 10:59:42 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84684 00:21:12.328 killing process with pid 84684 00:21:12.328 10:59:42 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:12.328 10:59:42 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:12.328 10:59:42 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84684' 00:21:12.328 10:59:42 keyring_file -- common/autotest_common.sh@969 -- # kill 84684 00:21:12.328 [2024-07-25 10:59:42.050503] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:12.328 10:59:42 keyring_file -- common/autotest_common.sh@974 -- # wait 84684 00:21:12.895 00:21:12.895 real 0m15.843s 00:21:12.895 user 0m39.092s 00:21:12.895 sys 0m3.313s 00:21:12.895 10:59:42 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:12.895 ************************************ 00:21:12.895 END TEST keyring_file 00:21:12.895 ************************************ 00:21:12.895 10:59:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:12.895 10:59:42 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:21:12.895 10:59:42 -- spdk/autotest.sh@301 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:12.895 10:59:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:12.895 10:59:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.895 10:59:42 -- common/autotest_common.sh@10 -- # set +x 00:21:12.895 ************************************ 00:21:12.895 START TEST keyring_linux 00:21:12.895 ************************************ 00:21:12.895 10:59:42 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:12.895 * Looking for test storage... 
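keyring_file ends here after roughly 15.8 s of wall-clock time and keyring_linux begins. The teardown that closed it follows the harness's killprocess pattern, traced above for pid 84950 (bdevperf) and pid 84684 (spdk_tgt). Condensed into a sketch (the killproc name is local to this sketch, and both pids are assumed to be children of the calling shell so that wait can reap them):

killproc() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0                          # already gone, nothing to do
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1     # refuse to kill a privileged wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                                      # reap the child and propagate its exit status
}
killproc 84950    # bdevperf
killproc 84684    # spdk_tgt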
00:21:12.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:12.895 10:59:42 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:12.895 10:59:42 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:12.895 10:59:42 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:12.895 10:59:42 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.895 10:59:42 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.895 10:59:42 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.895 10:59:42 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.895 10:59:42 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bb4b8bd3-cfb4-4368-bf29-91254747069c 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=bb4b8bd3-cfb4-4368-bf29-91254747069c 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:12.896 10:59:42 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.896 10:59:42 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.896 10:59:42 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.896 10:59:42 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.896 10:59:42 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.896 10:59:42 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.896 10:59:42 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:12.896 10:59:42 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:12.896 10:59:42 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:12.896 10:59:42 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:12.896 10:59:42 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:12.896 10:59:42 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:12.896 10:59:42 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:12.896 10:59:42 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:12.896 10:59:42 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:12.896 10:59:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:12.896 10:59:42 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:12.896 10:59:42 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:12.896 10:59:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:12.896 10:59:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:12.896 10:59:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:12.896 10:59:42 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:13.155 10:59:42 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:13.155 /tmp/:spdk-test:key0 00:21:13.155 10:59:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:13.155 10:59:42 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:13.155 10:59:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:13.155 10:59:42 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:13.155 10:59:42 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:13.155 10:59:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:13.155 10:59:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:13.155 10:59:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:13.155 10:59:42 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:13.155 10:59:42 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:13.155 10:59:42 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:13.155 10:59:42 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:13.155 10:59:42 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:13.155 10:59:42 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:13.155 10:59:42 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:13.155 /tmp/:spdk-test:key1 00:21:13.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.155 10:59:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:13.155 10:59:42 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85064 00:21:13.155 10:59:42 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.155 10:59:42 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85064 00:21:13.155 10:59:42 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 85064 ']' 00:21:13.155 10:59:42 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.155 10:59:42 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.155 10:59:42 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.155 10:59:42 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.155 10:59:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:13.155 [2024-07-25 10:59:42.760298] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:13.155 [2024-07-25 10:59:42.760605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85064 ] 00:21:13.414 [2024-07-25 10:59:42.898676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.414 [2024-07-25 10:59:43.007270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.414 [2024-07-25 10:59:43.065258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:21:14.349 10:59:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:14.349 [2024-07-25 10:59:43.762693] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.349 null0 00:21:14.349 [2024-07-25 10:59:43.798613] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.349 [2024-07-25 10:59:43.798908] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.349 10:59:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:14.349 826923943 00:21:14.349 10:59:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:14.349 311922354 00:21:14.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:14.349 10:59:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85082 00:21:14.349 10:59:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85082 /var/tmp/bperf.sock 00:21:14.349 10:59:43 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 85082 ']' 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:14.349 10:59:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:14.349 [2024-07-25 10:59:43.880423] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:14.349 [2024-07-25 10:59:43.880815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85082 ] 00:21:14.349 [2024-07-25 10:59:44.020890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.608 [2024-07-25 10:59:44.158173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.174 10:59:44 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:15.174 10:59:44 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:21:15.174 10:59:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:15.174 10:59:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:15.433 10:59:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:15.433 10:59:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:15.691 [2024-07-25 10:59:45.351027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:15.691 10:59:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:15.691 10:59:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:16.258 [2024-07-25 10:59:45.692723] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.258 nvme0n1 00:21:16.258 10:59:45 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:16.258 10:59:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:16.258 10:59:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:16.258 10:59:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:16.258 10:59:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:16.258 10:59:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.517 10:59:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:16.517 10:59:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:16.517 10:59:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:16.517 10:59:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:16.517 10:59:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:16.517 10:59:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:16.517 10:59:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.774 10:59:46 keyring_linux -- keyring/linux.sh@25 -- # sn=826923943 00:21:16.774 10:59:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:16.774 10:59:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:16.774 
10:59:46 keyring_linux -- keyring/linux.sh@26 -- # [[ 826923943 == \8\2\6\9\2\3\9\4\3 ]] 00:21:16.774 10:59:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 826923943 00:21:16.774 10:59:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:16.775 10:59:46 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:16.775 Running I/O for 1 seconds... 00:21:17.710 00:21:17.710 Latency(us) 00:21:17.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.710 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:17.710 nvme0n1 : 1.01 12104.42 47.28 0.00 0.00 10513.69 3753.43 14179.61 00:21:17.710 =================================================================================================================== 00:21:17.710 Total : 12104.42 47.28 0.00 0.00 10513.69 3753.43 14179.61 00:21:17.710 0 00:21:17.710 10:59:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:17.710 10:59:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:18.277 10:59:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:18.277 10:59:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:18.277 10:59:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:18.277 10:59:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:18.277 10:59:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.277 10:59:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:18.277 10:59:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:18.277 10:59:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:18.277 10:59:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:18.277 10:59:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:18.277 10:59:47 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:21:18.277 10:59:47 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:18.277 10:59:47 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:18.277 10:59:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.277 10:59:47 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:18.277 10:59:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.277 10:59:47 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:18.277 10:59:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:18.537 [2024-07-25 10:59:48.171329] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:18.537 [2024-07-25 10:59:48.171517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebc460 (107): Transport endpoint is not connected 00:21:18.537 [2024-07-25 10:59:48.172502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebc460 (9): Bad file descriptor 00:21:18.537 [2024-07-25 10:59:48.173499] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:18.537 [2024-07-25 10:59:48.173523] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:18.537 [2024-07-25 10:59:48.173533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:18.537 request: 00:21:18.537 { 00:21:18.537 "name": "nvme0", 00:21:18.537 "trtype": "tcp", 00:21:18.537 "traddr": "127.0.0.1", 00:21:18.537 "adrfam": "ipv4", 00:21:18.537 "trsvcid": "4420", 00:21:18.537 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.537 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:18.537 "prchk_reftag": false, 00:21:18.537 "prchk_guard": false, 00:21:18.537 "hdgst": false, 00:21:18.537 "ddgst": false, 00:21:18.537 "psk": ":spdk-test:key1", 00:21:18.537 "method": "bdev_nvme_attach_controller", 00:21:18.537 "req_id": 1 00:21:18.537 } 00:21:18.537 Got JSON-RPC error response 00:21:18.537 response: 00:21:18.537 { 00:21:18.537 "code": -5, 00:21:18.537 "message": "Input/output error" 00:21:18.537 } 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@33 -- # sn=826923943 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 826923943 00:21:18.537 1 links removed 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@33 -- # sn=311922354 00:21:18.537 10:59:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 311922354 00:21:18.537 1 links removed 00:21:18.537 10:59:48 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 85082 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 85082 ']' 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 85082 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85082 00:21:18.537 killing process with pid 85082 00:21:18.537 Received shutdown signal, test time was about 1.000000 seconds 00:21:18.537 00:21:18.537 Latency(us) 00:21:18.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.537 =================================================================================================================== 00:21:18.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85082' 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@969 -- # kill 85082 00:21:18.537 10:59:48 keyring_linux -- common/autotest_common.sh@974 -- # wait 85082 00:21:18.796 10:59:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85064 00:21:18.796 10:59:48 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 85064 ']' 00:21:18.796 10:59:48 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 85064 00:21:18.796 10:59:48 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:21:18.796 10:59:48 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:18.796 10:59:48 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85064 00:21:18.796 killing process with pid 85064 00:21:18.796 10:59:48 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:18.796 10:59:48 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:18.796 10:59:48 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85064' 00:21:18.796 10:59:48 keyring_linux -- common/autotest_common.sh@969 -- # kill 85064 00:21:18.796 10:59:48 keyring_linux -- common/autotest_common.sh@974 -- # wait 85064 00:21:19.364 00:21:19.364 real 0m6.392s 00:21:19.364 user 0m12.195s 00:21:19.364 sys 0m1.729s 00:21:19.364 10:59:48 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:19.364 10:59:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:19.364 ************************************ 00:21:19.364 END TEST keyring_linux 00:21:19.364 ************************************ 00:21:19.364 10:59:48 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:21:19.364 10:59:48 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:21:19.364 10:59:48 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:19.364 10:59:48 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:21:19.364 10:59:48 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:19.364 10:59:48 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:21:19.364 10:59:48 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:21:19.364 10:59:48 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:21:19.364 10:59:48 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:19.364 10:59:48 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:21:19.364 
10:59:48 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:21:19.364 10:59:48 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:21:19.364 10:59:48 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:21:19.364 10:59:48 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:21:19.364 10:59:48 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:21:19.364 10:59:48 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:21:19.364 10:59:48 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:21:19.364 10:59:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:19.364 10:59:48 -- common/autotest_common.sh@10 -- # set +x 00:21:19.364 10:59:48 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:21:19.364 10:59:48 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:21:19.364 10:59:48 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:21:19.364 10:59:48 -- common/autotest_common.sh@10 -- # set +x 00:21:21.273 INFO: APP EXITING 00:21:21.273 INFO: killing all VMs 00:21:21.273 INFO: killing vhost app 00:21:21.273 INFO: EXIT DONE 00:21:21.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:21.790 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:21.791 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:22.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:22.358 Cleaning 00:21:22.358 Removing: /var/run/dpdk/spdk0/config 00:21:22.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:22.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:22.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:22.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:22.358 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:22.358 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:22.358 Removing: /var/run/dpdk/spdk1/config 00:21:22.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:22.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:22.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:22.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:22.358 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:22.358 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:22.358 Removing: /var/run/dpdk/spdk2/config 00:21:22.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:22.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:22.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:22.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:22.358 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:22.358 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:22.358 Removing: /var/run/dpdk/spdk3/config 00:21:22.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:22.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:22.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:22.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:22.358 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:22.358 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:22.358 Removing: /var/run/dpdk/spdk4/config 00:21:22.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:22.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:22.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:22.616 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:22.616 Removing: 
/var/run/dpdk/spdk4/fbarray_memzone 00:21:22.616 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:22.616 Removing: /dev/shm/nvmf_trace.0 00:21:22.616 Removing: /dev/shm/spdk_tgt_trace.pid58764 00:21:22.616 Removing: /var/run/dpdk/spdk0 00:21:22.616 Removing: /var/run/dpdk/spdk1 00:21:22.616 Removing: /var/run/dpdk/spdk2 00:21:22.616 Removing: /var/run/dpdk/spdk3 00:21:22.616 Removing: /var/run/dpdk/spdk4 00:21:22.616 Removing: /var/run/dpdk/spdk_pid58619 00:21:22.616 Removing: /var/run/dpdk/spdk_pid58764 00:21:22.616 Removing: /var/run/dpdk/spdk_pid58962 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59043 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59076 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59184 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59209 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59327 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59517 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59663 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59734 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59810 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59907 00:21:22.616 Removing: /var/run/dpdk/spdk_pid59978 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60017 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60052 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60114 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60213 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60646 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60698 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60749 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60765 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60842 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60859 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60926 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60942 00:21:22.616 Removing: /var/run/dpdk/spdk_pid60993 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61011 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61051 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61075 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61202 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61233 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61307 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61623 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61635 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61671 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61685 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61706 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61730 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61744 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61765 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61789 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61803 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61824 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61843 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61862 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61883 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61904 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61923 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61944 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61963 00:21:22.616 Removing: /var/run/dpdk/spdk_pid61982 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62004 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62039 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62048 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62083 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62147 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62181 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62196 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62229 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62240 00:21:22.616 Removing: 
/var/run/dpdk/spdk_pid62249 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62297 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62315 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62345 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62354 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62369 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62379 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62394 00:21:22.616 Removing: /var/run/dpdk/spdk_pid62403 00:21:22.617 Removing: /var/run/dpdk/spdk_pid62413 00:21:22.617 Removing: /var/run/dpdk/spdk_pid62428 00:21:22.617 Removing: /var/run/dpdk/spdk_pid62461 00:21:22.617 Removing: /var/run/dpdk/spdk_pid62488 00:21:22.617 Removing: /var/run/dpdk/spdk_pid62498 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62532 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62541 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62549 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62595 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62612 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62644 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62646 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62659 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62672 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62685 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62687 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62700 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62713 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62787 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62840 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62950 00:21:22.876 Removing: /var/run/dpdk/spdk_pid62989 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63034 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63054 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63076 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63095 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63128 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63149 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63220 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63242 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63293 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63362 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63432 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63463 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63560 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63608 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63641 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63865 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63961 00:21:22.876 Removing: /var/run/dpdk/spdk_pid63991 00:21:22.876 Removing: /var/run/dpdk/spdk_pid64343 00:21:22.876 Removing: /var/run/dpdk/spdk_pid64386 00:21:22.876 Removing: /var/run/dpdk/spdk_pid64675 00:21:22.876 Removing: /var/run/dpdk/spdk_pid65079 00:21:22.876 Removing: /var/run/dpdk/spdk_pid65347 00:21:22.876 Removing: /var/run/dpdk/spdk_pid66125 00:21:22.876 Removing: /var/run/dpdk/spdk_pid66941 00:21:22.876 Removing: /var/run/dpdk/spdk_pid67057 00:21:22.876 Removing: /var/run/dpdk/spdk_pid67125 00:21:22.876 Removing: /var/run/dpdk/spdk_pid68391 00:21:22.876 Removing: /var/run/dpdk/spdk_pid68650 00:21:22.876 Removing: /var/run/dpdk/spdk_pid72032 00:21:22.876 Removing: /var/run/dpdk/spdk_pid72345 00:21:22.876 Removing: /var/run/dpdk/spdk_pid72453 00:21:22.876 Removing: /var/run/dpdk/spdk_pid72582 00:21:22.876 Removing: /var/run/dpdk/spdk_pid72610 00:21:22.876 Removing: /var/run/dpdk/spdk_pid72636 00:21:22.876 Removing: /var/run/dpdk/spdk_pid72665 00:21:22.876 Removing: /var/run/dpdk/spdk_pid72759 00:21:22.876 Removing: /var/run/dpdk/spdk_pid72892 00:21:22.876 Removing: /var/run/dpdk/spdk_pid73042 
00:21:22.876 Removing: /var/run/dpdk/spdk_pid73128 00:21:22.876 Removing: /var/run/dpdk/spdk_pid73316 00:21:22.876 Removing: /var/run/dpdk/spdk_pid73399 00:21:22.876 Removing: /var/run/dpdk/spdk_pid73492 00:21:22.876 Removing: /var/run/dpdk/spdk_pid73799 00:21:22.876 Removing: /var/run/dpdk/spdk_pid74214 00:21:22.876 Removing: /var/run/dpdk/spdk_pid74216 00:21:22.876 Removing: /var/run/dpdk/spdk_pid74491 00:21:22.876 Removing: /var/run/dpdk/spdk_pid74505 00:21:22.876 Removing: /var/run/dpdk/spdk_pid74520 00:21:22.876 Removing: /var/run/dpdk/spdk_pid74562 00:21:22.876 Removing: /var/run/dpdk/spdk_pid74571 00:21:22.876 Removing: /var/run/dpdk/spdk_pid74884 00:21:22.876 Removing: /var/run/dpdk/spdk_pid74929 00:21:22.876 Removing: /var/run/dpdk/spdk_pid75215 00:21:22.876 Removing: /var/run/dpdk/spdk_pid75422 00:21:22.876 Removing: /var/run/dpdk/spdk_pid75802 00:21:22.876 Removing: /var/run/dpdk/spdk_pid76308 00:21:22.876 Removing: /var/run/dpdk/spdk_pid77126 00:21:22.876 Removing: /var/run/dpdk/spdk_pid77707 00:21:22.876 Removing: /var/run/dpdk/spdk_pid77715 00:21:22.876 Removing: /var/run/dpdk/spdk_pid79615 00:21:22.876 Removing: /var/run/dpdk/spdk_pid79675 00:21:22.876 Removing: /var/run/dpdk/spdk_pid79737 00:21:22.876 Removing: /var/run/dpdk/spdk_pid79796 00:21:22.876 Removing: /var/run/dpdk/spdk_pid79918 00:21:22.876 Removing: /var/run/dpdk/spdk_pid79978 00:21:22.876 Removing: /var/run/dpdk/spdk_pid80043 00:21:22.876 Removing: /var/run/dpdk/spdk_pid80099 00:21:23.135 Removing: /var/run/dpdk/spdk_pid80428 00:21:23.135 Removing: /var/run/dpdk/spdk_pid81594 00:21:23.135 Removing: /var/run/dpdk/spdk_pid81735 00:21:23.135 Removing: /var/run/dpdk/spdk_pid81978 00:21:23.135 Removing: /var/run/dpdk/spdk_pid82532 00:21:23.135 Removing: /var/run/dpdk/spdk_pid82692 00:21:23.135 Removing: /var/run/dpdk/spdk_pid82849 00:21:23.135 Removing: /var/run/dpdk/spdk_pid82946 00:21:23.135 Removing: /var/run/dpdk/spdk_pid83102 00:21:23.135 Removing: /var/run/dpdk/spdk_pid83211 00:21:23.135 Removing: /var/run/dpdk/spdk_pid83865 00:21:23.135 Removing: /var/run/dpdk/spdk_pid83905 00:21:23.135 Removing: /var/run/dpdk/spdk_pid83936 00:21:23.135 Removing: /var/run/dpdk/spdk_pid84190 00:21:23.135 Removing: /var/run/dpdk/spdk_pid84225 00:21:23.135 Removing: /var/run/dpdk/spdk_pid84255 00:21:23.135 Removing: /var/run/dpdk/spdk_pid84684 00:21:23.135 Removing: /var/run/dpdk/spdk_pid84697 00:21:23.135 Removing: /var/run/dpdk/spdk_pid84950 00:21:23.135 Removing: /var/run/dpdk/spdk_pid85064 00:21:23.135 Removing: /var/run/dpdk/spdk_pid85082 00:21:23.135 Clean 00:21:23.135 10:59:52 -- common/autotest_common.sh@1451 -- # return 0 00:21:23.135 10:59:52 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:21:23.135 10:59:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.135 10:59:52 -- common/autotest_common.sh@10 -- # set +x 00:21:23.135 10:59:52 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:21:23.135 10:59:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.135 10:59:52 -- common/autotest_common.sh@10 -- # set +x 00:21:23.135 10:59:52 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:23.136 10:59:52 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:23.136 10:59:52 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:23.136 10:59:52 -- spdk/autotest.sh@395 -- # hash lcov 00:21:23.136 10:59:52 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:23.136 
10:59:52 -- spdk/autotest.sh@397 -- # hostname 00:21:23.136 10:59:52 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:23.394 geninfo: WARNING: invalid characters removed from testname! 00:21:45.666 11:00:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:48.968 11:00:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:52.252 11:00:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:54.191 11:00:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:57.473 11:00:26 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:00.003 11:00:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:02.531 11:00:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:02.531 11:00:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:02.531 11:00:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:02.531 11:00:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.531 11:00:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.532 11:00:32 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.532 11:00:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.532 11:00:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.532 11:00:32 -- paths/export.sh@5 -- $ export PATH 00:22:02.532 11:00:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.532 11:00:32 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:02.532 11:00:32 -- common/autobuild_common.sh@447 -- $ date +%s 00:22:02.532 11:00:32 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721905232.XXXXXX 00:22:02.793 11:00:32 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721905232.ifbhI9 00:22:02.793 11:00:32 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:22:02.793 11:00:32 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:22:02.793 11:00:32 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:02.793 11:00:32 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:02.793 11:00:32 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:02.793 11:00:32 -- common/autobuild_common.sh@463 -- $ get_config_params 00:22:02.793 11:00:32 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:22:02.793 11:00:32 -- common/autotest_common.sh@10 -- $ set +x 00:22:02.793 11:00:32 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:02.793 11:00:32 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:22:02.793 11:00:32 -- pm/common@17 -- $ local monitor 00:22:02.793 11:00:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:02.793 11:00:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:02.793 
11:00:32 -- pm/common@25 -- $ sleep 1 00:22:02.793 11:00:32 -- pm/common@21 -- $ date +%s 00:22:02.793 11:00:32 -- pm/common@21 -- $ date +%s 00:22:02.793 11:00:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721905232 00:22:02.793 11:00:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721905232 00:22:02.793 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721905232_collect-vmstat.pm.log 00:22:02.793 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721905232_collect-cpu-load.pm.log 00:22:03.727 11:00:33 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:22:03.727 11:00:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:03.727 11:00:33 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:03.727 11:00:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:03.727 11:00:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:03.727 11:00:33 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:03.727 11:00:33 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:03.727 11:00:33 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:03.727 11:00:33 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:03.727 11:00:33 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:03.727 11:00:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:03.727 11:00:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:03.727 11:00:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:03.727 11:00:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:03.727 11:00:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:03.727 11:00:33 -- pm/common@44 -- $ pid=86785 00:22:03.727 11:00:33 -- pm/common@50 -- $ kill -TERM 86785 00:22:03.727 11:00:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:03.727 11:00:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:03.727 11:00:33 -- pm/common@44 -- $ pid=86786 00:22:03.727 11:00:33 -- pm/common@50 -- $ kill -TERM 86786 00:22:03.727 + [[ -n 5113 ]] 00:22:03.727 + sudo kill 5113 00:22:03.738 [Pipeline] } 00:22:03.815 [Pipeline] // timeout 00:22:03.822 [Pipeline] } 00:22:03.833 [Pipeline] // stage 00:22:03.838 [Pipeline] } 00:22:03.850 [Pipeline] // catchError 00:22:03.857 [Pipeline] stage 00:22:03.859 [Pipeline] { (Stop VM) 00:22:03.868 [Pipeline] sh 00:22:04.145 + vagrant halt 00:22:07.429 ==> default: Halting domain... 00:22:14.000 [Pipeline] sh 00:22:14.278 + vagrant destroy -f 00:22:17.635 ==> default: Removing domain... 
00:22:18.216 [Pipeline] sh 00:22:18.499 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:18.508 [Pipeline] } 00:22:18.528 [Pipeline] // stage 00:22:18.536 [Pipeline] } 00:22:18.554 [Pipeline] // dir 00:22:18.560 [Pipeline] } 00:22:18.574 [Pipeline] // wrap 00:22:18.581 [Pipeline] } 00:22:18.593 [Pipeline] // catchError 00:22:18.602 [Pipeline] stage 00:22:18.604 [Pipeline] { (Epilogue) 00:22:18.616 [Pipeline] sh 00:22:18.893 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:25.470 [Pipeline] catchError 00:22:25.472 [Pipeline] { 00:22:25.485 [Pipeline] sh 00:22:25.764 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:25.765 Artifacts sizes are good 00:22:25.772 [Pipeline] } 00:22:25.785 [Pipeline] // catchError 00:22:25.794 [Pipeline] archiveArtifacts 00:22:25.800 Archiving artifacts 00:22:25.969 [Pipeline] cleanWs 00:22:25.979 [WS-CLEANUP] Deleting project workspace... 00:22:25.979 [WS-CLEANUP] Deferred wipeout is used... 00:22:25.988 [WS-CLEANUP] done 00:22:25.990 [Pipeline] } 00:22:26.005 [Pipeline] // stage 00:22:26.010 [Pipeline] } 00:22:26.026 [Pipeline] // node 00:22:26.032 [Pipeline] End of Pipeline 00:22:26.060 Finished: SUCCESS
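
For reference, the keyring_linux run above reduces to the short shell sequence below. This is a condensed sketch assembled only from commands visible in this transcript; the rpc.py path, the /var/tmp/bperf.sock socket, the :spdk-test:key0 key name, and the 127.0.0.1:4420 listener are simply the values this particular run used, and PSK_INTERCHANGE is a stand-in for the NVMeTLSkey-1:00:...: string that format_interchange_psk printed earlier, so treat it as an illustration distilled from the log rather than a verified recipe.

# Load the interchange-format TLS PSK into the kernel session keyring (prints the key serial).
keyctl add user :spdk-test:key0 "$PSK_INTERCHANGE" @s

# Enable the Linux keyring backend in bdevperf, then finish subsystem initialization.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

# Attach the NVMe-oF TCP controller, naming the keyring entry as the TLS PSK.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0

# Verify the key is visible to the bdevperf application, then unlink it by serial when done.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
sn=$(keyctl search @s user :spdk-test:key0)
keyctl unlink "$sn"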